Sep 6 01:19:59.997602 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 6 01:19:59.997620 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 5 23:00:12 -00 2025 Sep 6 01:19:59.997628 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Sep 6 01:19:59.997635 kernel: printk: bootconsole [pl11] enabled Sep 6 01:20:00.002095 kernel: efi: EFI v2.70 by EDK II Sep 6 01:20:00.002104 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3761cf98 Sep 6 01:20:00.002111 kernel: random: crng init done Sep 6 01:20:00.002117 kernel: ACPI: Early table checksum verification disabled Sep 6 01:20:00.002122 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Sep 6 01:20:00.002128 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:20:00.002134 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:20:00.002139 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 6 01:20:00.002149 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:20:00.002155 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:20:00.002161 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:20:00.002167 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:20:00.002173 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:20:00.002180 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:20:00.002186 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Sep 6 01:20:00.002192 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:20:00.002198 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Sep 6 01:20:00.002204 kernel: NUMA: Failed to initialise from firmware Sep 6 01:20:00.002209 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] Sep 6 01:20:00.002215 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff] Sep 6 01:20:00.002221 kernel: Zone ranges: Sep 6 01:20:00.002227 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Sep 6 01:20:00.002233 kernel: DMA32 empty Sep 6 01:20:00.002238 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Sep 6 01:20:00.002245 kernel: Movable zone start for each node Sep 6 01:20:00.002251 kernel: Early memory node ranges Sep 6 01:20:00.002257 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Sep 6 01:20:00.002263 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Sep 6 01:20:00.002268 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 6 01:20:00.002274 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 6 01:20:00.002280 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 6 01:20:00.002286 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 6 01:20:00.002291 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 6 01:20:00.002297 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 6 01:20:00.002303 kernel: On node 0, zone DMA: 36 
pages in unavailable ranges Sep 6 01:20:00.002309 kernel: psci: probing for conduit method from ACPI. Sep 6 01:20:00.002318 kernel: psci: PSCIv1.1 detected in firmware. Sep 6 01:20:00.002324 kernel: psci: Using standard PSCI v0.2 function IDs Sep 6 01:20:00.002331 kernel: psci: MIGRATE_INFO_TYPE not supported. Sep 6 01:20:00.002337 kernel: psci: SMC Calling Convention v1.4 Sep 6 01:20:00.002343 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 Sep 6 01:20:00.002350 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 Sep 6 01:20:00.002356 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Sep 6 01:20:00.002363 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Sep 6 01:20:00.002369 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 6 01:20:00.002375 kernel: Detected PIPT I-cache on CPU0 Sep 6 01:20:00.002382 kernel: CPU features: detected: GIC system register CPU interface Sep 6 01:20:00.002388 kernel: CPU features: detected: Hardware dirty bit management Sep 6 01:20:00.002394 kernel: CPU features: detected: Spectre-BHB Sep 6 01:20:00.002400 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 6 01:20:00.002406 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 6 01:20:00.002412 kernel: CPU features: detected: ARM erratum 1418040 Sep 6 01:20:00.002419 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Sep 6 01:20:00.002426 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 6 01:20:00.002432 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Sep 6 01:20:00.002438 kernel: Policy zone: Normal Sep 6 01:20:00.002445 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4 Sep 6 01:20:00.002452 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 6 01:20:00.002458 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 6 01:20:00.002464 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 01:20:00.002471 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 6 01:20:00.002477 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB) Sep 6 01:20:00.002483 kernel: Memory: 3986876K/4194160K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 207284K reserved, 0K cma-reserved) Sep 6 01:20:00.002491 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 6 01:20:00.002497 kernel: trace event string verifier disabled Sep 6 01:20:00.002503 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 6 01:20:00.002510 kernel: rcu: RCU event tracing is enabled. Sep 6 01:20:00.002516 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 6 01:20:00.002522 kernel: Trampoline variant of Tasks RCU enabled. Sep 6 01:20:00.002529 kernel: Tracing variant of Tasks RCU enabled. Sep 6 01:20:00.002535 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 6 01:20:00.002541 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 6 01:20:00.002547 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 6 01:20:00.002553 kernel: GICv3: 960 SPIs implemented Sep 6 01:20:00.002560 kernel: GICv3: 0 Extended SPIs implemented Sep 6 01:20:00.002567 kernel: GICv3: Distributor has no Range Selector support Sep 6 01:20:00.002573 kernel: Root IRQ handler: gic_handle_irq Sep 6 01:20:00.002579 kernel: GICv3: 16 PPIs implemented Sep 6 01:20:00.002585 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 6 01:20:00.002591 kernel: ITS: No ITS available, not enabling LPIs Sep 6 01:20:00.002597 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 6 01:20:00.002604 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 6 01:20:00.002610 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 6 01:20:00.002616 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 6 01:20:00.002622 kernel: Console: colour dummy device 80x25 Sep 6 01:20:00.002630 kernel: printk: console [tty1] enabled Sep 6 01:20:00.002637 kernel: ACPI: Core revision 20210730 Sep 6 01:20:00.002657 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 6 01:20:00.002663 kernel: pid_max: default: 32768 minimum: 301 Sep 6 01:20:00.002669 kernel: LSM: Security Framework initializing Sep 6 01:20:00.002676 kernel: SELinux: Initializing. Sep 6 01:20:00.002682 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 01:20:00.002688 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 01:20:00.002695 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Sep 6 01:20:00.002703 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Sep 6 01:20:00.002709 kernel: rcu: Hierarchical SRCU implementation. Sep 6 01:20:00.002715 kernel: Remapping and enabling EFI services. Sep 6 01:20:00.002721 kernel: smp: Bringing up secondary CPUs ... Sep 6 01:20:00.002728 kernel: Detected PIPT I-cache on CPU1 Sep 6 01:20:00.002734 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 6 01:20:00.002741 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 6 01:20:00.002747 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 6 01:20:00.002753 kernel: smp: Brought up 1 node, 2 CPUs Sep 6 01:20:00.002760 kernel: SMP: Total of 2 processors activated. 
Sep 6 01:20:00.002767 kernel: CPU features: detected: 32-bit EL0 Support Sep 6 01:20:00.002774 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 6 01:20:00.002780 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 6 01:20:00.002786 kernel: CPU features: detected: CRC32 instructions Sep 6 01:20:00.002793 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 6 01:20:00.002799 kernel: CPU features: detected: LSE atomic instructions Sep 6 01:20:00.002805 kernel: CPU features: detected: Privileged Access Never Sep 6 01:20:00.002811 kernel: CPU: All CPU(s) started at EL1 Sep 6 01:20:00.002818 kernel: alternatives: patching kernel code Sep 6 01:20:00.002825 kernel: devtmpfs: initialized Sep 6 01:20:00.002836 kernel: KASLR enabled Sep 6 01:20:00.002843 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 6 01:20:00.002851 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 6 01:20:00.002857 kernel: pinctrl core: initialized pinctrl subsystem Sep 6 01:20:00.002864 kernel: SMBIOS 3.1.0 present. Sep 6 01:20:00.002871 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 6 01:20:00.002877 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 6 01:20:00.002884 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 6 01:20:00.002892 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 6 01:20:00.002899 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 6 01:20:00.002906 kernel: audit: initializing netlink subsys (disabled) Sep 6 01:20:00.002912 kernel: audit: type=2000 audit(0.084:1): state=initialized audit_enabled=0 res=1 Sep 6 01:20:00.002919 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 6 01:20:00.002926 kernel: cpuidle: using governor menu Sep 6 01:20:00.002932 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 6 01:20:00.002940 kernel: ASID allocator initialised with 32768 entries Sep 6 01:20:00.002947 kernel: ACPI: bus type PCI registered Sep 6 01:20:00.002954 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 6 01:20:00.002960 kernel: Serial: AMBA PL011 UART driver Sep 6 01:20:00.002967 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 6 01:20:00.002973 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Sep 6 01:20:00.002980 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 6 01:20:00.002987 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Sep 6 01:20:00.002993 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 01:20:00.003001 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 6 01:20:00.003008 kernel: ACPI: Added _OSI(Module Device) Sep 6 01:20:00.003014 kernel: ACPI: Added _OSI(Processor Device) Sep 6 01:20:00.003021 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 6 01:20:00.003028 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 6 01:20:00.003034 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 6 01:20:00.003041 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 6 01:20:00.003047 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 6 01:20:00.003054 kernel: ACPI: Interpreter enabled Sep 6 01:20:00.003062 kernel: ACPI: Using GIC for interrupt routing Sep 6 01:20:00.003069 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 6 01:20:00.003075 kernel: printk: console [ttyAMA0] enabled Sep 6 01:20:00.003082 kernel: printk: bootconsole [pl11] disabled Sep 6 01:20:00.003089 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 6 01:20:00.003096 kernel: iommu: Default domain type: Translated Sep 6 01:20:00.003102 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 6 01:20:00.003109 kernel: vgaarb: loaded Sep 6 01:20:00.003115 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 6 01:20:00.003122 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 6 01:20:00.003130 kernel: PTP clock support registered Sep 6 01:20:00.003137 kernel: Registered efivars operations Sep 6 01:20:00.003143 kernel: No ACPI PMU IRQ for CPU0 Sep 6 01:20:00.003150 kernel: No ACPI PMU IRQ for CPU1 Sep 6 01:20:00.003156 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 6 01:20:00.003163 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 01:20:00.003170 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 01:20:00.003176 kernel: pnp: PnP ACPI init Sep 6 01:20:00.003183 kernel: pnp: PnP ACPI: found 0 devices Sep 6 01:20:00.003191 kernel: NET: Registered PF_INET protocol family Sep 6 01:20:00.003197 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 6 01:20:00.003204 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 6 01:20:00.003211 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 01:20:00.003218 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 6 01:20:00.003225 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 6 01:20:00.003231 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 6 01:20:00.003238 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 01:20:00.003246 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 01:20:00.003253 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 6 01:20:00.003260 kernel: PCI: CLS 0 bytes, default 64 Sep 6 01:20:00.003267 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Sep 6 01:20:00.003273 kernel: kvm [1]: HYP mode not available Sep 6 01:20:00.003280 kernel: Initialise system trusted keyrings Sep 6 01:20:00.003287 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 6 01:20:00.003293 kernel: Key type asymmetric registered Sep 6 01:20:00.003300 kernel: Asymmetric key parser 'x509' registered Sep 6 01:20:00.003308 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 6 01:20:00.003314 kernel: io scheduler mq-deadline registered Sep 6 01:20:00.003321 kernel: io scheduler kyber registered Sep 6 01:20:00.003327 kernel: io scheduler bfq registered Sep 6 01:20:00.003334 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 01:20:00.003340 kernel: thunder_xcv, ver 1.0 Sep 6 01:20:00.003347 kernel: thunder_bgx, ver 1.0 Sep 6 01:20:00.003353 kernel: nicpf, ver 1.0 Sep 6 01:20:00.003360 kernel: nicvf, ver 1.0 Sep 6 01:20:00.003479 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 6 01:20:00.003542 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T01:19:59 UTC (1757121599) Sep 6 01:20:00.003551 kernel: efifb: probing for efifb Sep 6 01:20:00.003558 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 6 01:20:00.003564 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 6 01:20:00.003571 kernel: efifb: scrolling: redraw Sep 6 01:20:00.003577 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 6 01:20:00.003584 kernel: Console: switching to colour frame buffer device 128x48 Sep 6 01:20:00.003592 kernel: fb0: EFI VGA frame buffer device Sep 6 01:20:00.003599 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Sep 6 01:20:00.003605 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 6 01:20:00.003612 kernel: NET: Registered PF_INET6 protocol family Sep 6 01:20:00.003619 kernel: Segment Routing with IPv6 Sep 6 01:20:00.003625 kernel: In-situ OAM (IOAM) with IPv6 Sep 6 01:20:00.003632 kernel: NET: Registered PF_PACKET protocol family Sep 6 01:20:00.003657 kernel: Key type dns_resolver registered Sep 6 01:20:00.003664 kernel: registered taskstats version 1 Sep 6 01:20:00.003671 kernel: Loading compiled-in X.509 certificates Sep 6 01:20:00.003680 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 72ab5ba99c2368429c7a4d04fccfc5a39dd84386' Sep 6 01:20:00.003686 kernel: Key type .fscrypt registered Sep 6 01:20:00.003693 kernel: Key type fscrypt-provisioning registered Sep 6 01:20:00.003700 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 6 01:20:00.003706 kernel: ima: Allocated hash algorithm: sha1 Sep 6 01:20:00.003713 kernel: ima: No architecture policies found Sep 6 01:20:00.003719 kernel: clk: Disabling unused clocks Sep 6 01:20:00.003726 kernel: Freeing unused kernel memory: 36416K Sep 6 01:20:00.003734 kernel: Run /init as init process Sep 6 01:20:00.003740 kernel: with arguments: Sep 6 01:20:00.003747 kernel: /init Sep 6 01:20:00.003753 kernel: with environment: Sep 6 01:20:00.003760 kernel: HOME=/ Sep 6 01:20:00.003766 kernel: TERM=linux Sep 6 01:20:00.003773 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 01:20:00.003781 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 01:20:00.003791 systemd[1]: Detected virtualization microsoft. Sep 6 01:20:00.003799 systemd[1]: Detected architecture arm64. Sep 6 01:20:00.003806 systemd[1]: Running in initrd. Sep 6 01:20:00.003813 systemd[1]: No hostname configured, using default hostname. Sep 6 01:20:00.003819 systemd[1]: Hostname set to <localhost>. Sep 6 01:20:00.003827 systemd[1]: Initializing machine ID from random generator. Sep 6 01:20:00.003834 systemd[1]: Queued start job for default target initrd.target. Sep 6 01:20:00.003841 systemd[1]: Started systemd-ask-password-console.path. Sep 6 01:20:00.003849 systemd[1]: Reached target cryptsetup.target. Sep 6 01:20:00.003856 systemd[1]: Reached target paths.target. Sep 6 01:20:00.003863 systemd[1]: Reached target slices.target. Sep 6 01:20:00.003870 systemd[1]: Reached target swap.target. Sep 6 01:20:00.003877 systemd[1]: Reached target timers.target. Sep 6 01:20:00.003884 systemd[1]: Listening on iscsid.socket. Sep 6 01:20:00.003891 systemd[1]: Listening on iscsiuio.socket. Sep 6 01:20:00.003898 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 01:20:00.003907 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 01:20:00.003914 systemd[1]: Listening on systemd-journald.socket. Sep 6 01:20:00.003921 systemd[1]: Listening on systemd-networkd.socket. Sep 6 01:20:00.003928 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 01:20:00.003935 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 01:20:00.003942 systemd[1]: Reached target sockets.target. Sep 6 01:20:00.003949 systemd[1]: Starting kmod-static-nodes.service... Sep 6 01:20:00.003956 systemd[1]: Finished network-cleanup.service. 
Sep 6 01:20:00.003963 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 01:20:00.003971 systemd[1]: Starting systemd-journald.service... Sep 6 01:20:00.003978 systemd[1]: Starting systemd-modules-load.service... Sep 6 01:20:00.003985 systemd[1]: Starting systemd-resolved.service... Sep 6 01:20:00.003995 systemd-journald[276]: Journal started Sep 6 01:20:00.004036 systemd-journald[276]: Runtime Journal (/run/log/journal/a01ff2a426ef40b8be8b3260eb3d4f0b) is 8.0M, max 78.5M, 70.5M free. Sep 6 01:20:00.000019 systemd-modules-load[277]: Inserted module 'overlay' Sep 6 01:20:00.023244 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 01:20:00.044556 systemd[1]: Started systemd-journald.service. Sep 6 01:20:00.044614 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 6 01:20:00.035860 systemd-resolved[278]: Positive Trust Anchors: Sep 6 01:20:00.091435 kernel: audit: type=1130 audit(1757121600.049:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.091460 kernel: Bridge firewalling registered Sep 6 01:20:00.091468 kernel: audit: type=1130 audit(1757121600.072:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.035875 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 01:20:00.122100 kernel: audit: type=1130 audit(1757121600.103:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.122127 kernel: SCSI subsystem initialized Sep 6 01:20:00.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.035905 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 01:20:00.163658 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 01:20:00.038023 systemd-resolved[278]: Defaulting to hostname 'linux'. 
Sep 6 01:20:00.195732 kernel: device-mapper: uevent: version 1.0.3 Sep 6 01:20:00.195755 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 01:20:00.195764 kernel: audit: type=1130 audit(1757121600.179:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.049514 systemd[1]: Started systemd-resolved.service. Sep 6 01:20:00.217710 kernel: audit: type=1130 audit(1757121600.199:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.073024 systemd[1]: Finished kmod-static-nodes.service. Sep 6 01:20:00.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.077006 systemd-modules-load[277]: Inserted module 'br_netfilter' Sep 6 01:20:00.250749 kernel: audit: type=1130 audit(1757121600.223:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.118949 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 01:20:00.194721 systemd-modules-load[277]: Inserted module 'dm_multipath' Sep 6 01:20:00.194759 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 01:20:00.199750 systemd[1]: Finished systemd-modules-load.service. Sep 6 01:20:00.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.224117 systemd[1]: Reached target nss-lookup.target. Sep 6 01:20:00.250324 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 01:20:00.338758 kernel: audit: type=1130 audit(1757121600.286:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.338783 kernel: audit: type=1130 audit(1757121600.287:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.255022 systemd[1]: Starting systemd-sysctl.service... Sep 6 01:20:00.259689 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 01:20:00.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:00.282749 systemd[1]: Finished systemd-sysctl.service. Sep 6 01:20:00.287801 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 01:20:00.378340 kernel: audit: type=1130 audit(1757121600.346:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.338929 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 01:20:00.372881 systemd[1]: Starting dracut-cmdline.service... Sep 6 01:20:00.394713 dracut-cmdline[299]: dracut-dracut-053 Sep 6 01:20:00.398618 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4 Sep 6 01:20:00.489681 kernel: Loading iSCSI transport class v2.0-870. Sep 6 01:20:00.505678 kernel: iscsi: registered transport (tcp) Sep 6 01:20:00.525648 kernel: iscsi: registered transport (qla4xxx) Sep 6 01:20:00.525711 kernel: QLogic iSCSI HBA Driver Sep 6 01:20:00.559760 systemd[1]: Finished dracut-cmdline.service. Sep 6 01:20:00.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:00.566026 systemd[1]: Starting dracut-pre-udev.service... Sep 6 01:20:00.617670 kernel: raid6: neonx8 gen() 13816 MB/s Sep 6 01:20:00.638652 kernel: raid6: neonx8 xor() 10775 MB/s Sep 6 01:20:00.658650 kernel: raid6: neonx4 gen() 13549 MB/s Sep 6 01:20:00.678649 kernel: raid6: neonx4 xor() 10982 MB/s Sep 6 01:20:00.699650 kernel: raid6: neonx2 gen() 13082 MB/s Sep 6 01:20:00.719649 kernel: raid6: neonx2 xor() 10251 MB/s Sep 6 01:20:00.739653 kernel: raid6: neonx1 gen() 10611 MB/s Sep 6 01:20:00.761649 kernel: raid6: neonx1 xor() 8789 MB/s Sep 6 01:20:00.781649 kernel: raid6: int64x8 gen() 6270 MB/s Sep 6 01:20:00.801653 kernel: raid6: int64x8 xor() 3545 MB/s Sep 6 01:20:00.822656 kernel: raid6: int64x4 gen() 7226 MB/s Sep 6 01:20:00.842650 kernel: raid6: int64x4 xor() 3854 MB/s Sep 6 01:20:00.863653 kernel: raid6: int64x2 gen() 6155 MB/s Sep 6 01:20:00.884653 kernel: raid6: int64x2 xor() 3318 MB/s Sep 6 01:20:00.905651 kernel: raid6: int64x1 gen() 5049 MB/s Sep 6 01:20:00.929813 kernel: raid6: int64x1 xor() 2647 MB/s Sep 6 01:20:00.929824 kernel: raid6: using algorithm neonx8 gen() 13816 MB/s Sep 6 01:20:00.929833 kernel: raid6: .... xor() 10775 MB/s, rmw enabled Sep 6 01:20:00.933802 kernel: raid6: using neon recovery algorithm Sep 6 01:20:00.952657 kernel: xor: measuring software checksum speed Sep 6 01:20:00.960421 kernel: 8regs : 16155 MB/sec Sep 6 01:20:00.960443 kernel: 32regs : 20733 MB/sec Sep 6 01:20:00.963999 kernel: arm64_neon : 27946 MB/sec Sep 6 01:20:00.964008 kernel: xor: using function: arm64_neon (27946 MB/sec) Sep 6 01:20:01.027661 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Sep 6 01:20:01.037222 systemd[1]: Finished dracut-pre-udev.service. Sep 6 01:20:01.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 01:20:01.044000 audit: BPF prog-id=7 op=LOAD Sep 6 01:20:01.044000 audit: BPF prog-id=8 op=LOAD Sep 6 01:20:01.045320 systemd[1]: Starting systemd-udevd.service... Sep 6 01:20:01.062284 systemd-udevd[476]: Using default interface naming scheme 'v252'. Sep 6 01:20:01.068860 systemd[1]: Started systemd-udevd.service. Sep 6 01:20:01.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:01.078948 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 01:20:01.089320 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation Sep 6 01:20:01.119751 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 01:20:01.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:01.125042 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 01:20:01.157974 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 01:20:01.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:01.213660 kernel: hv_vmbus: Vmbus version:5.3 Sep 6 01:20:01.219664 kernel: hv_vmbus: registering driver hid_hyperv Sep 6 01:20:01.241817 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Sep 6 01:20:01.241847 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 6 01:20:01.241858 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 6 01:20:01.245957 kernel: hv_vmbus: registering driver hv_netvsc Sep 6 01:20:01.245984 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Sep 6 01:20:01.264215 kernel: hv_vmbus: registering driver hv_storvsc Sep 6 01:20:01.270650 kernel: scsi host0: storvsc_host_t Sep 6 01:20:01.274651 kernel: scsi host1: storvsc_host_t Sep 6 01:20:01.288525 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 6 01:20:01.288570 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 6 01:20:01.304211 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 6 01:20:01.304906 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 6 01:20:01.304926 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 6 01:20:01.322794 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 6 01:20:01.345510 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 6 01:20:01.345610 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 6 01:20:01.345710 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 6 01:20:01.345789 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 6 01:20:01.345880 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 01:20:01.345897 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 6 01:20:01.379084 kernel: hv_netvsc 000d3afd-1fc0-000d-3afd-1fc0000d3afd eth0: VF slot 1 added Sep 6 01:20:01.393181 kernel: hv_vmbus: registering driver hv_pci Sep 6 01:20:01.393219 kernel: hv_pci 8933d67e-670c-48a1-9fc0-ce3e1a537ac0: PCI VMBus probing: Using version 0x10004 Sep 6 01:20:01.611099 kernel: hv_pci 
8933d67e-670c-48a1-9fc0-ce3e1a537ac0: PCI host bridge to bus 670c:00 Sep 6 01:20:01.611199 kernel: pci_bus 670c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 6 01:20:01.611296 kernel: pci_bus 670c:00: No busn resource found for root bus, will use [bus 00-ff] Sep 6 01:20:01.611372 kernel: pci 670c:00:02.0: [15b3:1018] type 00 class 0x020000 Sep 6 01:20:01.611467 kernel: pci 670c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 6 01:20:01.611547 kernel: pci 670c:00:02.0: enabling Extended Tags Sep 6 01:20:01.611628 kernel: pci 670c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 670c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Sep 6 01:20:01.611735 kernel: pci_bus 670c:00: busn_res: [bus 00-ff] end is updated to 00 Sep 6 01:20:01.611810 kernel: pci 670c:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 6 01:20:01.646689 kernel: mlx5_core 670c:00:02.0: enabling device (0000 -> 0002) Sep 6 01:20:01.954730 kernel: mlx5_core 670c:00:02.0: firmware version: 16.31.2424 Sep 6 01:20:01.954839 kernel: mlx5_core 670c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Sep 6 01:20:01.954918 kernel: hv_netvsc 000d3afd-1fc0-000d-3afd-1fc0000d3afd eth0: VF registering: eth1 Sep 6 01:20:01.954997 kernel: mlx5_core 670c:00:02.0 eth1: joined to eth0 Sep 6 01:20:01.931519 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 01:20:01.967414 kernel: mlx5_core 670c:00:02.0 enP26380s1: renamed from eth1 Sep 6 01:20:01.974657 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (543) Sep 6 01:20:01.989077 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 01:20:02.088560 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 01:20:02.099000 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 01:20:02.106733 systemd[1]: Starting disk-uuid.service... Sep 6 01:20:02.141507 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 01:20:03.138293 disk-uuid[598]: The operation has completed successfully. Sep 6 01:20:03.143204 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 01:20:03.190407 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 01:20:03.193849 systemd[1]: Finished disk-uuid.service. Sep 6 01:20:03.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:03.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:03.207187 systemd[1]: Starting verity-setup.service... Sep 6 01:20:03.244671 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 6 01:20:03.401523 systemd[1]: Found device dev-mapper-usr.device. Sep 6 01:20:03.406582 systemd[1]: Mounting sysusr-usr.mount... Sep 6 01:20:03.418742 systemd[1]: Finished verity-setup.service. Sep 6 01:20:03.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:03.472653 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 01:20:03.473305 systemd[1]: Mounted sysusr-usr.mount. 
Sep 6 01:20:03.477420 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 01:20:03.478224 systemd[1]: Starting ignition-setup.service... Sep 6 01:20:03.492233 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 01:20:03.517454 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 6 01:20:03.517496 kernel: BTRFS info (device sda6): using free space tree Sep 6 01:20:03.522755 kernel: BTRFS info (device sda6): has skinny extents Sep 6 01:20:03.568365 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 01:20:03.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:03.577000 audit: BPF prog-id=9 op=LOAD Sep 6 01:20:03.578161 systemd[1]: Starting systemd-networkd.service... Sep 6 01:20:03.590534 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 01:20:03.607357 systemd-networkd[873]: lo: Link UP Sep 6 01:20:03.607368 systemd-networkd[873]: lo: Gained carrier Sep 6 01:20:03.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:03.608127 systemd-networkd[873]: Enumeration completed Sep 6 01:20:03.610481 systemd[1]: Started systemd-networkd.service. Sep 6 01:20:03.611149 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 01:20:03.615284 systemd[1]: Reached target network.target. Sep 6 01:20:03.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:03.626082 systemd[1]: Starting iscsiuio.service... Sep 6 01:20:03.651435 iscsid[880]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 01:20:03.651435 iscsid[880]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 6 01:20:03.651435 iscsid[880]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 01:20:03.651435 iscsid[880]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 01:20:03.651435 iscsid[880]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 01:20:03.651435 iscsid[880]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 01:20:03.651435 iscsid[880]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 01:20:03.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:03.634008 systemd[1]: Started iscsiuio.service. Sep 6 01:20:03.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:03.647449 systemd[1]: Starting iscsid.service... 
Sep 6 01:20:03.661365 systemd[1]: Started iscsid.service. Sep 6 01:20:03.695828 systemd[1]: Starting dracut-initqueue.service... Sep 6 01:20:03.724105 systemd[1]: Finished dracut-initqueue.service. Sep 6 01:20:03.731484 systemd[1]: Reached target remote-fs-pre.target. Sep 6 01:20:03.739269 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 01:20:03.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:03.747791 systemd[1]: Reached target remote-fs.target. Sep 6 01:20:03.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:03.760200 systemd[1]: Starting dracut-pre-mount.service... Sep 6 01:20:03.775852 systemd[1]: Finished ignition-setup.service. Sep 6 01:20:03.783502 systemd[1]: Finished dracut-pre-mount.service. Sep 6 01:20:03.795770 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 01:20:03.825657 kernel: mlx5_core 670c:00:02.0 enP26380s1: Link up Sep 6 01:20:03.902723 kernel: hv_netvsc 000d3afd-1fc0-000d-3afd-1fc0000d3afd eth0: Data path switched to VF: enP26380s1 Sep 6 01:20:03.902897 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:20:03.903102 systemd-networkd[873]: enP26380s1: Link UP Sep 6 01:20:03.903188 systemd-networkd[873]: eth0: Link UP Sep 6 01:20:03.903308 systemd-networkd[873]: eth0: Gained carrier Sep 6 01:20:03.918081 systemd-networkd[873]: enP26380s1: Gained carrier Sep 6 01:20:03.924711 systemd-networkd[873]: eth0: DHCPv4 address 10.200.20.27/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 6 01:20:05.106875 systemd-networkd[873]: eth0: Gained IPv6LL Sep 6 01:20:06.038561 ignition[895]: Ignition 2.14.0 Sep 6 01:20:06.038573 ignition[895]: Stage: fetch-offline Sep 6 01:20:06.038626 ignition[895]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:20:06.038664 ignition[895]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 6 01:20:06.115031 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 6 01:20:06.115171 ignition[895]: parsed url from cmdline: "" Sep 6 01:20:06.115174 ignition[895]: no config URL provided Sep 6 01:20:06.115179 ignition[895]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 01:20:06.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:06.121812 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 01:20:06.162759 kernel: kauditd_printk_skb: 18 callbacks suppressed Sep 6 01:20:06.162782 kernel: audit: type=1130 audit(1757121606.130:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:06.115188 ignition[895]: no config at "/usr/lib/ignition/user.ign" Sep 6 01:20:06.140600 systemd[1]: Starting ignition-fetch.service... 
Sep 6 01:20:06.115194 ignition[895]: failed to fetch config: resource requires networking Sep 6 01:20:06.115660 ignition[895]: Ignition finished successfully Sep 6 01:20:06.147012 ignition[902]: Ignition 2.14.0 Sep 6 01:20:06.147018 ignition[902]: Stage: fetch Sep 6 01:20:06.147111 ignition[902]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:20:06.147128 ignition[902]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 6 01:20:06.149591 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 6 01:20:06.149726 ignition[902]: parsed url from cmdline: "" Sep 6 01:20:06.149729 ignition[902]: no config URL provided Sep 6 01:20:06.149733 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 01:20:06.149740 ignition[902]: no config at "/usr/lib/ignition/user.ign" Sep 6 01:20:06.149765 ignition[902]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 6 01:20:06.293487 ignition[902]: GET result: OK Sep 6 01:20:06.293603 ignition[902]: config has been read from IMDS userdata Sep 6 01:20:06.293680 ignition[902]: parsing config with SHA512: 1ebe73e986c15ae9eff469e52771f310a26e067bb6f0a796e2466c268e3e345ceb0d09a0867f5b54fa2cb488123d2445c76edf98178bb627059c98c3a25c5d6c Sep 6 01:20:06.296774 unknown[902]: fetched base config from "system" Sep 6 01:20:06.326431 kernel: audit: type=1130 audit(1757121606.304:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:06.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:06.297309 ignition[902]: fetch: fetch complete Sep 6 01:20:06.296781 unknown[902]: fetched base config from "system" Sep 6 01:20:06.297315 ignition[902]: fetch: fetch passed Sep 6 01:20:06.296790 unknown[902]: fetched user config from "azure" Sep 6 01:20:06.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:06.297349 ignition[902]: Ignition finished successfully Sep 6 01:20:06.365578 kernel: audit: type=1130 audit(1757121606.346:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:06.298511 systemd[1]: Finished ignition-fetch.service. Sep 6 01:20:06.333739 ignition[908]: Ignition 2.14.0 Sep 6 01:20:06.305560 systemd[1]: Starting ignition-kargs.service... Sep 6 01:20:06.333745 ignition[908]: Stage: kargs Sep 6 01:20:06.342247 systemd[1]: Finished ignition-kargs.service. Sep 6 01:20:06.333842 ignition[908]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:20:06.365826 systemd[1]: Starting ignition-disks.service... Sep 6 01:20:06.333863 ignition[908]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 6 01:20:06.393980 systemd[1]: Finished ignition-disks.service. 
Sep 6 01:20:06.422720 kernel: audit: type=1130 audit(1757121606.400:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:06.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:06.336542 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 6 01:20:06.401215 systemd[1]: Reached target initrd-root-device.target. Sep 6 01:20:06.338693 ignition[908]: kargs: kargs passed Sep 6 01:20:06.425078 systemd[1]: Reached target local-fs-pre.target. Sep 6 01:20:06.338744 ignition[908]: Ignition finished successfully Sep 6 01:20:06.434941 systemd[1]: Reached target local-fs.target. Sep 6 01:20:06.378238 ignition[914]: Ignition 2.14.0 Sep 6 01:20:06.443212 systemd[1]: Reached target sysinit.target. Sep 6 01:20:06.378244 ignition[914]: Stage: disks Sep 6 01:20:06.450064 systemd[1]: Reached target basic.target. Sep 6 01:20:06.378350 ignition[914]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:20:06.459462 systemd[1]: Starting systemd-fsck-root.service... Sep 6 01:20:06.378367 ignition[914]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 6 01:20:06.385045 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 6 01:20:06.387798 ignition[914]: disks: disks passed Sep 6 01:20:06.387846 ignition[914]: Ignition finished successfully Sep 6 01:20:06.527460 systemd-fsck[922]: ROOT: clean, 629/7326000 files, 481083/7359488 blocks Sep 6 01:20:06.539026 systemd[1]: Finished systemd-fsck-root.service. Sep 6 01:20:06.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:06.544758 systemd[1]: Mounting sysroot.mount... Sep 6 01:20:06.575318 kernel: audit: type=1130 audit(1757121606.543:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:06.588667 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 01:20:06.589670 systemd[1]: Mounted sysroot.mount. Sep 6 01:20:06.593921 systemd[1]: Reached target initrd-root-fs.target. Sep 6 01:20:06.630613 systemd[1]: Mounting sysroot-usr.mount... Sep 6 01:20:06.635342 systemd[1]: Starting flatcar-metadata-hostname.service... Sep 6 01:20:06.647660 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 01:20:06.647707 systemd[1]: Reached target ignition-diskful.target. Sep 6 01:20:06.662447 systemd[1]: Mounted sysroot-usr.mount. Sep 6 01:20:06.717135 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 01:20:06.721995 systemd[1]: Starting initrd-setup-root.service... 
Sep 6 01:20:06.749587 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (932) Sep 6 01:20:06.749645 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 6 01:20:06.749663 initrd-setup-root[937]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 01:20:06.765394 kernel: BTRFS info (device sda6): using free space tree Sep 6 01:20:06.765416 kernel: BTRFS info (device sda6): has skinny extents Sep 6 01:20:06.769782 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 01:20:06.780935 initrd-setup-root[963]: cut: /sysroot/etc/group: No such file or directory Sep 6 01:20:06.802701 initrd-setup-root[971]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 01:20:06.812537 initrd-setup-root[979]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 01:20:07.206602 systemd[1]: Finished initrd-setup-root.service. Sep 6 01:20:07.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:07.211765 systemd[1]: Starting ignition-mount.service... Sep 6 01:20:07.237569 kernel: audit: type=1130 audit(1757121607.210:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:07.234623 systemd[1]: Starting sysroot-boot.service... Sep 6 01:20:07.243860 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 6 01:20:07.243967 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Sep 6 01:20:07.266981 ignition[999]: INFO : Ignition 2.14.0 Sep 6 01:20:07.266981 ignition[999]: INFO : Stage: mount Sep 6 01:20:07.277947 ignition[999]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:20:07.277947 ignition[999]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 6 01:20:07.277947 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 6 01:20:07.277947 ignition[999]: INFO : mount: mount passed Sep 6 01:20:07.277947 ignition[999]: INFO : Ignition finished successfully Sep 6 01:20:07.332166 kernel: audit: type=1130 audit(1757121607.287:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:07.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:07.278833 systemd[1]: Finished ignition-mount.service. Sep 6 01:20:07.332970 systemd[1]: Finished sysroot-boot.service. Sep 6 01:20:07.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:07.358659 kernel: audit: type=1130 audit(1757121607.339:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:07.795979 coreos-metadata[931]: Sep 06 01:20:07.795 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 6 01:20:07.803999 coreos-metadata[931]: Sep 06 01:20:07.803 INFO Fetch successful Sep 6 01:20:07.837459 coreos-metadata[931]: Sep 06 01:20:07.837 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 6 01:20:07.850446 coreos-metadata[931]: Sep 06 01:20:07.850 INFO Fetch successful Sep 6 01:20:07.867583 coreos-metadata[931]: Sep 06 01:20:07.867 INFO wrote hostname ci-3510.3.8-n-34c19deec5 to /sysroot/etc/hostname Sep 6 01:20:07.876020 systemd[1]: Finished flatcar-metadata-hostname.service. Sep 6 01:20:07.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:07.900162 systemd[1]: Starting ignition-files.service... Sep 6 01:20:07.909021 kernel: audit: type=1130 audit(1757121607.880:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:07.909894 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 01:20:07.927657 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1010) Sep 6 01:20:07.938545 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 6 01:20:07.938560 kernel: BTRFS info (device sda6): using free space tree Sep 6 01:20:07.938576 kernel: BTRFS info (device sda6): has skinny extents Sep 6 01:20:07.947342 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 01:20:07.962789 ignition[1029]: INFO : Ignition 2.14.0 Sep 6 01:20:07.966725 ignition[1029]: INFO : Stage: files Sep 6 01:20:07.970129 ignition[1029]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:20:07.970129 ignition[1029]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 6 01:20:07.989135 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 6 01:20:07.989135 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Sep 6 01:20:07.989135 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 01:20:07.989135 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 01:20:08.053803 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 01:20:08.062305 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 01:20:08.070837 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 01:20:08.067798 unknown[1029]: wrote ssh authorized keys file for user: core Sep 6 01:20:08.086151 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 01:20:08.086151 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 01:20:08.086151 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 6 01:20:08.086151 
ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 6 01:20:08.133787 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 6 01:20:08.326041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 6 01:20:08.326041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 6 01:20:08.326041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 01:20:08.326041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 6 01:20:08.326041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 6 01:20:08.326041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 01:20:08.326041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 01:20:08.326041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 01:20:08.326041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 01:20:08.420701 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 01:20:08.420701 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 01:20:08.420701 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 01:20:08.420701 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 01:20:08.420701 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Sep 6 01:20:08.420701 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Sep 6 01:20:08.420701 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3464408465" Sep 6 01:20:08.420701 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3464408465": device or resource busy Sep 6 01:20:08.420701 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3464408465", trying btrfs: device or resource busy Sep 6 01:20:08.420701 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3464408465" Sep 6 01:20:08.420701 ignition[1029]: INFO : files: createFilesystemsFiles: 
createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3464408465" Sep 6 01:20:08.420701 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3464408465" Sep 6 01:20:08.420701 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3464408465" Sep 6 01:20:08.420701 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Sep 6 01:20:08.352004 systemd[1]: mnt-oem3464408465.mount: Deactivated successfully. Sep 6 01:20:08.577457 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 6 01:20:08.577457 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Sep 6 01:20:08.577457 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2386815904" Sep 6 01:20:08.577457 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2386815904": device or resource busy Sep 6 01:20:08.577457 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2386815904", trying btrfs: device or resource busy Sep 6 01:20:08.577457 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2386815904" Sep 6 01:20:08.577457 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2386815904" Sep 6 01:20:08.577457 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem2386815904" Sep 6 01:20:08.577457 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem2386815904" Sep 6 01:20:08.577457 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 6 01:20:08.577457 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 01:20:08.577457 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 6 01:20:08.374416 systemd[1]: mnt-oem2386815904.mount: Deactivated successfully. 
Sep 6 01:20:08.914956 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Sep 6 01:20:09.168955 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(14): [started] processing unit "waagent.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(14): [finished] processing unit "waagent.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(15): [started] processing unit "nvidia.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(15): [finished] processing unit "nvidia.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(16): [started] processing unit "containerd.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(16): op(17): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(16): op(17): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(16): [finished] processing unit "containerd.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(18): [started] processing unit "prepare-helm.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(18): [finished] processing unit "prepare-helm.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(1b): [started] setting preset to enabled for "waagent.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(1b): [finished] setting preset to enabled for "waagent.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(1c): [started] setting preset to enabled for "nvidia.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: op(1c): [finished] setting preset to enabled for "nvidia.service" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 01:20:09.182954 ignition[1029]: INFO : files: files passed Sep 6 01:20:09.182954 ignition[1029]: INFO : Ignition finished successfully Sep 6 01:20:09.424016 kernel: audit: type=1130 audit(1757121609.186:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:09.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.182892 systemd[1]: Finished ignition-files.service. Sep 6 01:20:09.190103 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 01:20:09.212945 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 01:20:09.450918 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 01:20:09.220256 systemd[1]: Starting ignition-quench.service... Sep 6 01:20:09.239114 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 01:20:09.248990 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 01:20:09.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.249070 systemd[1]: Finished ignition-quench.service. Sep 6 01:20:09.264971 systemd[1]: Reached target ignition-complete.target. Sep 6 01:20:09.281892 systemd[1]: Starting initrd-parse-etc.service... Sep 6 01:20:09.325337 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 01:20:09.325446 systemd[1]: Finished initrd-parse-etc.service. Sep 6 01:20:09.334760 systemd[1]: Reached target initrd-fs.target. Sep 6 01:20:09.345895 systemd[1]: Reached target initrd.target. Sep 6 01:20:09.357230 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 01:20:09.358054 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 01:20:09.411397 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 01:20:09.420092 systemd[1]: Starting initrd-cleanup.service... Sep 6 01:20:09.445011 systemd[1]: Stopped target nss-lookup.target. Sep 6 01:20:09.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.455224 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 01:20:09.467419 systemd[1]: Stopped target timers.target. 
Sep 6 01:20:09.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.474965 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 01:20:09.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.475112 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 01:20:09.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.483987 systemd[1]: Stopped target initrd.target. Sep 6 01:20:09.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.492627 systemd[1]: Stopped target basic.target. Sep 6 01:20:09.500030 systemd[1]: Stopped target ignition-complete.target. Sep 6 01:20:09.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.655399 iscsid[880]: iscsid shutting down. Sep 6 01:20:09.508893 systemd[1]: Stopped target ignition-diskful.target. Sep 6 01:20:09.673446 ignition[1067]: INFO : Ignition 2.14.0 Sep 6 01:20:09.673446 ignition[1067]: INFO : Stage: umount Sep 6 01:20:09.673446 ignition[1067]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:20:09.673446 ignition[1067]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 6 01:20:09.673446 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 6 01:20:09.673446 ignition[1067]: INFO : umount: umount passed Sep 6 01:20:09.673446 ignition[1067]: INFO : Ignition finished successfully Sep 6 01:20:09.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:09.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.517183 systemd[1]: Stopped target initrd-root-device.target. Sep 6 01:20:09.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.526461 systemd[1]: Stopped target remote-fs.target. Sep 6 01:20:09.534383 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 01:20:09.542120 systemd[1]: Stopped target sysinit.target. Sep 6 01:20:09.549625 systemd[1]: Stopped target local-fs.target. Sep 6 01:20:09.560132 systemd[1]: Stopped target local-fs-pre.target. Sep 6 01:20:09.568235 systemd[1]: Stopped target swap.target. Sep 6 01:20:09.575730 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 01:20:09.575879 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 01:20:09.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.584157 systemd[1]: Stopped target cryptsetup.target. Sep 6 01:20:09.592026 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 01:20:09.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.592163 systemd[1]: Stopped dracut-initqueue.service. Sep 6 01:20:09.602729 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 01:20:09.602872 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 01:20:09.611452 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 01:20:09.611575 systemd[1]: Stopped ignition-files.service. Sep 6 01:20:09.620683 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 6 01:20:09.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.620815 systemd[1]: Stopped flatcar-metadata-hostname.service. Sep 6 01:20:09.630563 systemd[1]: Stopping ignition-mount.service... Sep 6 01:20:09.639504 systemd[1]: Stopping iscsid.service... Sep 6 01:20:09.646226 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 01:20:09.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.646456 systemd[1]: Stopped kmod-static-nodes.service. 
Sep 6 01:20:09.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.652317 systemd[1]: Stopping sysroot-boot.service... Sep 6 01:20:09.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.669537 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 01:20:09.669788 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 01:20:09.678273 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 01:20:09.951000 audit: BPF prog-id=6 op=UNLOAD Sep 6 01:20:09.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.678407 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 01:20:09.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.688889 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 01:20:09.689673 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 01:20:09.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.689775 systemd[1]: Stopped iscsid.service. Sep 6 01:20:09.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.694881 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 01:20:10.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.694970 systemd[1]: Stopped ignition-mount.service. Sep 6 01:20:09.706257 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 01:20:09.706338 systemd[1]: Finished initrd-cleanup.service. Sep 6 01:20:09.724428 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 01:20:10.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.724483 systemd[1]: Stopped ignition-disks.service. Sep 6 01:20:10.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.733777 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 01:20:10.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:10.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 01:20:09.733817 systemd[1]: Stopped ignition-kargs.service. Sep 6 01:20:10.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.738157 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 6 01:20:09.738191 systemd[1]: Stopped ignition-fetch.service. Sep 6 01:20:09.746439 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 01:20:10.078994 kernel: hv_netvsc 000d3afd-1fc0-000d-3afd-1fc0000d3afd eth0: Data path switched from VF: enP26380s1 Sep 6 01:20:09.746480 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 01:20:09.755438 systemd[1]: Stopped target paths.target. Sep 6 01:20:09.759452 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 01:20:09.768703 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 01:20:09.777787 systemd[1]: Stopped target slices.target. Sep 6 01:20:09.787681 systemd[1]: Stopped target sockets.target. Sep 6 01:20:09.799054 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 01:20:09.799106 systemd[1]: Closed iscsid.socket. Sep 6 01:20:09.806636 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 01:20:09.806689 systemd[1]: Stopped ignition-setup.service. Sep 6 01:20:09.815908 systemd[1]: Stopping iscsiuio.service... Sep 6 01:20:09.823612 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 01:20:09.823728 systemd[1]: Stopped iscsiuio.service. Sep 6 01:20:09.831436 systemd[1]: Stopped target network.target. Sep 6 01:20:09.839942 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 01:20:09.839974 systemd[1]: Closed iscsiuio.socket. Sep 6 01:20:09.847223 systemd[1]: Stopping systemd-networkd.service... Sep 6 01:20:09.856188 systemd[1]: Stopping systemd-resolved.service... Sep 6 01:20:09.860676 systemd-networkd[873]: eth0: DHCPv6 lease lost Sep 6 01:20:10.150000 audit: BPF prog-id=9 op=UNLOAD Sep 6 01:20:09.870618 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 01:20:09.870748 systemd[1]: Stopped systemd-networkd.service. Sep 6 01:20:09.879591 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 01:20:09.879647 systemd[1]: Closed systemd-networkd.socket. Sep 6 01:20:10.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.893029 systemd[1]: Stopping network-cleanup.service... Sep 6 01:20:09.901008 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 01:20:09.901075 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 01:20:09.908586 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 01:20:09.908632 systemd[1]: Stopped systemd-sysctl.service. Sep 6 01:20:09.921463 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 01:20:09.921545 systemd[1]: Stopped systemd-modules-load.service. Sep 6 01:20:09.932003 systemd[1]: Stopping systemd-udevd.service... Sep 6 01:20:09.939919 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 01:20:09.940390 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 01:20:09.940492 systemd[1]: Stopped systemd-resolved.service. Sep 6 01:20:09.952393 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 6 01:20:10.238000 audit: BPF prog-id=5 op=UNLOAD Sep 6 01:20:10.238000 audit: BPF prog-id=4 op=UNLOAD Sep 6 01:20:10.238000 audit: BPF prog-id=3 op=UNLOAD Sep 6 01:20:10.238000 audit: BPF prog-id=8 op=UNLOAD Sep 6 01:20:10.238000 audit: BPF prog-id=7 op=UNLOAD Sep 6 01:20:09.952518 systemd[1]: Stopped systemd-udevd.service. Sep 6 01:20:09.962978 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 01:20:09.963016 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 01:20:09.967720 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 01:20:09.967756 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 01:20:10.272019 systemd-journald[276]: Received SIGTERM from PID 1 (systemd). Sep 6 01:20:09.977032 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 01:20:09.977083 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 01:20:09.985430 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 01:20:09.985472 systemd[1]: Stopped dracut-cmdline.service. Sep 6 01:20:09.993794 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 01:20:09.993834 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 01:20:10.001722 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 01:20:10.018239 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 01:20:10.018326 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 01:20:10.027071 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 01:20:10.027205 systemd[1]: Stopped sysroot-boot.service. Sep 6 01:20:10.035491 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 01:20:10.035592 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 01:20:10.043712 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 01:20:10.043769 systemd[1]: Stopped initrd-setup-root.service. Sep 6 01:20:10.169213 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 01:20:10.169326 systemd[1]: Stopped network-cleanup.service. Sep 6 01:20:10.173665 systemd[1]: Reached target initrd-switch-root.target. Sep 6 01:20:10.185563 systemd[1]: Starting initrd-switch-root.service... Sep 6 01:20:10.236547 systemd[1]: Switching root. Sep 6 01:20:10.272567 systemd-journald[276]: Journal stopped Sep 6 01:20:20.562014 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 01:20:20.562034 kernel: SELinux: Class anon_inode not defined in policy. Sep 6 01:20:20.562044 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 01:20:20.562054 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 01:20:20.562064 kernel: SELinux: policy capability open_perms=1 Sep 6 01:20:20.562072 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 01:20:20.562081 kernel: SELinux: policy capability always_check_network=0 Sep 6 01:20:20.562090 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 01:20:20.562098 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 01:20:20.562106 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 01:20:20.562114 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 01:20:20.562123 kernel: kauditd_printk_skb: 48 callbacks suppressed Sep 6 01:20:20.562131 kernel: audit: type=1403 audit(1757121612.930:87): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 01:20:20.562141 systemd[1]: Successfully loaded SELinux policy in 254.729ms. 
Sep 6 01:20:20.562152 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.738ms. Sep 6 01:20:20.562164 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 01:20:20.562173 systemd[1]: Detected virtualization microsoft. Sep 6 01:20:20.562181 systemd[1]: Detected architecture arm64. Sep 6 01:20:20.562190 systemd[1]: Detected first boot. Sep 6 01:20:20.562199 systemd[1]: Hostname set to . Sep 6 01:20:20.562208 systemd[1]: Initializing machine ID from random generator. Sep 6 01:20:20.562217 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 01:20:20.562228 kernel: audit: type=1400 audit(1757121614.594:88): avc: denied { associate } for pid=1118 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 01:20:20.562238 kernel: audit: type=1300 audit(1757121614.594:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=40000224ac a1=40000285b8 a2=40000265c0 a3=32 items=0 ppid=1101 pid=1118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:20:20.562247 kernel: audit: type=1327 audit(1757121614.594:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 01:20:20.562257 kernel: audit: type=1400 audit(1757121614.604:89): avc: denied { associate } for pid=1118 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 01:20:20.562267 kernel: audit: type=1300 audit(1757121614.604:89): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022589 a2=1ed a3=0 items=2 ppid=1101 pid=1118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:20:20.562277 kernel: audit: type=1307 audit(1757121614.604:89): cwd="/" Sep 6 01:20:20.562286 kernel: audit: type=1302 audit(1757121614.604:89): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:20.562295 kernel: audit: type=1302 audit(1757121614.604:89): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:20.562305 kernel: audit: type=1327 audit(1757121614.604:89): 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 01:20:20.562314 systemd[1]: Populated /etc with preset unit settings. Sep 6 01:20:20.562323 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:20:20.562332 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:20:20.562343 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:20:20.562352 systemd[1]: Queued start job for default target multi-user.target. Sep 6 01:20:20.562361 systemd[1]: Unnecessary job was removed for dev-sda6.device. Sep 6 01:20:20.562371 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 01:20:20.562380 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 01:20:20.562389 systemd[1]: Created slice system-getty.slice. Sep 6 01:20:20.562400 systemd[1]: Created slice system-modprobe.slice. Sep 6 01:20:20.562410 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 01:20:20.562420 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 01:20:20.562429 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 01:20:20.562439 systemd[1]: Created slice user.slice. Sep 6 01:20:20.562448 systemd[1]: Started systemd-ask-password-console.path. Sep 6 01:20:20.562458 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 01:20:20.562467 systemd[1]: Set up automount boot.automount. Sep 6 01:20:20.562476 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 01:20:20.562486 systemd[1]: Reached target integritysetup.target. Sep 6 01:20:20.562496 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 01:20:20.562505 systemd[1]: Reached target remote-fs.target. Sep 6 01:20:20.562515 systemd[1]: Reached target slices.target. Sep 6 01:20:20.562524 systemd[1]: Reached target swap.target. Sep 6 01:20:20.562533 systemd[1]: Reached target torcx.target. Sep 6 01:20:20.562542 systemd[1]: Reached target veritysetup.target. Sep 6 01:20:20.562551 systemd[1]: Listening on systemd-coredump.socket. Sep 6 01:20:20.562560 systemd[1]: Listening on systemd-initctl.socket. Sep 6 01:20:20.562571 kernel: audit: type=1400 audit(1757121620.144:90): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 01:20:20.562580 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 01:20:20.562589 kernel: audit: type=1335 audit(1757121620.150:91): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 6 01:20:20.562598 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 01:20:20.562607 systemd[1]: Listening on systemd-journald.socket. Sep 6 01:20:20.562617 systemd[1]: Listening on systemd-networkd.socket. Sep 6 01:20:20.562626 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 01:20:20.562645 systemd[1]: Listening on systemd-udevd-kernel.socket. 
Sep 6 01:20:20.562656 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 01:20:20.562666 systemd[1]: Mounting dev-hugepages.mount... Sep 6 01:20:20.562675 systemd[1]: Mounting dev-mqueue.mount... Sep 6 01:20:20.562685 systemd[1]: Mounting media.mount... Sep 6 01:20:20.562694 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 01:20:20.562705 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 01:20:20.562714 systemd[1]: Mounting tmp.mount... Sep 6 01:20:20.562723 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 01:20:20.562733 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:20:20.562742 systemd[1]: Starting kmod-static-nodes.service... Sep 6 01:20:20.562751 systemd[1]: Starting modprobe@configfs.service... Sep 6 01:20:20.562760 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:20:20.562770 systemd[1]: Starting modprobe@drm.service... Sep 6 01:20:20.562779 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:20:20.562789 systemd[1]: Starting modprobe@fuse.service... Sep 6 01:20:20.562798 systemd[1]: Starting modprobe@loop.service... Sep 6 01:20:20.562808 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 01:20:20.562818 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 6 01:20:20.562827 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 6 01:20:20.562836 kernel: loop: module loaded Sep 6 01:20:20.562845 systemd[1]: Starting systemd-journald.service... Sep 6 01:20:20.562854 systemd[1]: Starting systemd-modules-load.service... Sep 6 01:20:20.562864 kernel: fuse: init (API version 7.34) Sep 6 01:20:20.562874 systemd[1]: Starting systemd-network-generator.service... Sep 6 01:20:20.562884 systemd[1]: Starting systemd-remount-fs.service... Sep 6 01:20:20.562893 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 01:20:20.562902 systemd[1]: Mounted dev-hugepages.mount. Sep 6 01:20:20.562911 systemd[1]: Mounted dev-mqueue.mount. Sep 6 01:20:20.562920 systemd[1]: Mounted media.mount. Sep 6 01:20:20.562930 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 01:20:20.562939 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 01:20:20.562948 systemd[1]: Mounted tmp.mount. Sep 6 01:20:20.562958 kernel: audit: type=1305 audit(1757121620.559:92): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 01:20:20.562971 systemd-journald[1234]: Journal started Sep 6 01:20:20.563008 systemd-journald[1234]: Runtime Journal (/run/log/journal/564a9b82c05946e580326efb45feb30a) is 8.0M, max 78.5M, 70.5M free. Sep 6 01:20:20.150000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 6 01:20:20.559000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 01:20:20.581626 systemd[1]: Started systemd-journald.service. 
Sep 6 01:20:20.581694 kernel: audit: type=1300 audit(1757121620.559:92): arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe68b7c10 a2=4000 a3=1 items=0 ppid=1 pid=1234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:20:20.559000 audit[1234]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe68b7c10 a2=4000 a3=1 items=0 ppid=1 pid=1234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:20:20.559000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 01:20:20.619951 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 01:20:20.620162 kernel: audit: type=1327 audit(1757121620.559:92): proctitle="/usr/lib/systemd/systemd-journald" Sep 6 01:20:20.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.631878 systemd[1]: Finished kmod-static-nodes.service. Sep 6 01:20:20.642229 kernel: audit: type=1130 audit(1757121620.618:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.662656 kernel: audit: type=1130 audit(1757121620.624:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.674303 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 01:20:20.674475 systemd[1]: Finished modprobe@configfs.service. Sep 6 01:20:20.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.697961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:20:20.698123 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:20:20.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.699686 kernel: audit: type=1130 audit(1757121620.673:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.699708 kernel: audit: type=1130 audit(1757121620.676:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:20.699720 kernel: audit: type=1131 audit(1757121620.676:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.742239 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 01:20:20.742473 systemd[1]: Finished modprobe@drm.service. Sep 6 01:20:20.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.746943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:20:20.747138 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:20:20.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.752227 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 01:20:20.752425 systemd[1]: Finished modprobe@fuse.service. Sep 6 01:20:20.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.757148 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:20:20.757395 systemd[1]: Finished modprobe@loop.service. Sep 6 01:20:20.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:20.763203 systemd[1]: Finished systemd-modules-load.service. Sep 6 01:20:20.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.769011 systemd[1]: Finished systemd-network-generator.service. Sep 6 01:20:20.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.774603 systemd[1]: Finished systemd-remount-fs.service. Sep 6 01:20:20.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.780151 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 01:20:20.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.785196 systemd[1]: Reached target network-pre.target. Sep 6 01:20:20.790938 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 01:20:20.796994 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 01:20:20.801264 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 01:20:20.803041 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 01:20:20.808441 systemd[1]: Starting systemd-journal-flush.service... Sep 6 01:20:20.815786 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:20:20.817072 systemd[1]: Starting systemd-random-seed.service... Sep 6 01:20:20.821703 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:20:20.822752 systemd[1]: Starting systemd-sysctl.service... Sep 6 01:20:20.827623 systemd[1]: Starting systemd-sysusers.service... Sep 6 01:20:20.832562 systemd[1]: Starting systemd-udev-settle.service... Sep 6 01:20:20.838377 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 01:20:20.843244 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 01:20:20.851965 udevadm[1270]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 6 01:20:20.874183 systemd[1]: Finished systemd-random-seed.service. Sep 6 01:20:20.879230 systemd[1]: Reached target first-boot-complete.target. Sep 6 01:20:20.879488 systemd-journald[1234]: Time spent on flushing to /var/log/journal/564a9b82c05946e580326efb45feb30a is 12.976ms for 1026 entries. Sep 6 01:20:20.879488 systemd-journald[1234]: System Journal (/var/log/journal/564a9b82c05946e580326efb45feb30a) is 8.0M, max 2.6G, 2.6G free. Sep 6 01:20:20.957956 systemd-journald[1234]: Received client request to flush runtime journal. Sep 6 01:20:20.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:20.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:20.926768 systemd[1]: Finished systemd-sysctl.service. Sep 6 01:20:20.958997 systemd[1]: Finished systemd-journal-flush.service. Sep 6 01:20:20.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:21.370082 systemd[1]: Finished systemd-sysusers.service. Sep 6 01:20:21.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:21.376264 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 01:20:21.725266 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 01:20:21.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:21.786999 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 01:20:21.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:21.793246 systemd[1]: Starting systemd-udevd.service... Sep 6 01:20:21.811567 systemd-udevd[1281]: Using default interface naming scheme 'v252'. Sep 6 01:20:21.940807 systemd[1]: Started systemd-udevd.service. Sep 6 01:20:21.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:21.961110 systemd[1]: Starting systemd-networkd.service... Sep 6 01:20:21.981787 systemd[1]: Found device dev-ttyAMA0.device. Sep 6 01:20:22.016546 systemd[1]: Starting systemd-userdbd.service... 
Sep 6 01:20:22.064000 audit[1284]: AVC avc: denied { confidentiality } for pid=1284 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 01:20:22.083662 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 01:20:22.083757 kernel: hv_vmbus: registering driver hv_balloon Sep 6 01:20:22.083775 kernel: hv_vmbus: registering driver hyperv_fb Sep 6 01:20:22.093362 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 6 01:20:22.098654 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 6 01:20:22.098744 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 6 01:20:22.117591 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 6 01:20:22.124774 kernel: hv_utils: Registering HyperV Utility Driver Sep 6 01:20:22.124845 kernel: Console: switching to colour dummy device 80x25 Sep 6 01:20:22.132402 kernel: hv_vmbus: registering driver hv_utils Sep 6 01:20:22.143625 kernel: hv_utils: Heartbeat IC version 3.0 Sep 6 01:20:22.143734 kernel: hv_utils: Shutdown IC version 3.2 Sep 6 01:20:22.143752 kernel: Console: switching to colour frame buffer device 128x48 Sep 6 01:20:22.133883 systemd[1]: Started systemd-userdbd.service. Sep 6 01:20:22.144656 kernel: hv_utils: TimeSync IC version 4.0 Sep 6 01:20:22.064000 audit[1284]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaae97dbe30 a1=aa2c a2=ffffb03d24b0 a3=aaaae973c010 items=12 ppid=1281 pid=1284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:20:22.194551 systemd-journald[1234]: Time jumped backwards, rotating. 
Sep 6 01:20:22.064000 audit: CWD cwd="/" Sep 6 01:20:22.064000 audit: PATH item=0 name=(null) inode=5658 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:22.064000 audit: PATH item=1 name=(null) inode=9869 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:22.064000 audit: PATH item=2 name=(null) inode=9869 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:22.064000 audit: PATH item=3 name=(null) inode=9870 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:22.064000 audit: PATH item=4 name=(null) inode=9869 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:22.064000 audit: PATH item=5 name=(null) inode=9871 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:22.064000 audit: PATH item=6 name=(null) inode=9869 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:22.064000 audit: PATH item=7 name=(null) inode=9872 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:22.064000 audit: PATH item=8 name=(null) inode=9869 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:22.064000 audit: PATH item=9 name=(null) inode=9873 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:22.064000 audit: PATH item=10 name=(null) inode=9869 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:22.064000 audit: PATH item=11 name=(null) inode=9874 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:22.064000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 01:20:22.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:22.314042 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 01:20:22.319867 systemd[1]: Finished systemd-udev-settle.service. Sep 6 01:20:22.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:22.325558 systemd[1]: Starting lvm2-activation-early.service... 
Sep 6 01:20:22.379482 systemd-networkd[1302]: lo: Link UP Sep 6 01:20:22.379753 systemd-networkd[1302]: lo: Gained carrier Sep 6 01:20:22.380223 systemd-networkd[1302]: Enumeration completed Sep 6 01:20:22.380430 systemd[1]: Started systemd-networkd.service. Sep 6 01:20:22.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:22.386009 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 01:20:22.407043 systemd-networkd[1302]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 01:20:22.461262 kernel: mlx5_core 670c:00:02.0 enP26380s1: Link up Sep 6 01:20:22.505252 kernel: hv_netvsc 000d3afd-1fc0-000d-3afd-1fc0000d3afd eth0: Data path switched to VF: enP26380s1 Sep 6 01:20:22.505805 systemd-networkd[1302]: enP26380s1: Link UP Sep 6 01:20:22.505899 systemd-networkd[1302]: eth0: Link UP Sep 6 01:20:22.505906 systemd-networkd[1302]: eth0: Gained carrier Sep 6 01:20:22.512596 systemd-networkd[1302]: enP26380s1: Gained carrier Sep 6 01:20:22.521351 systemd-networkd[1302]: eth0: DHCPv4 address 10.200.20.27/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 6 01:20:22.625949 lvm[1360]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 01:20:22.701191 systemd[1]: Finished lvm2-activation-early.service. Sep 6 01:20:22.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:22.706319 systemd[1]: Reached target cryptsetup.target. Sep 6 01:20:22.711657 systemd[1]: Starting lvm2-activation.service... Sep 6 01:20:22.715960 lvm[1363]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 01:20:22.736263 systemd[1]: Finished lvm2-activation.service. Sep 6 01:20:22.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:22.740796 systemd[1]: Reached target local-fs-pre.target. Sep 6 01:20:22.745128 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 01:20:22.745154 systemd[1]: Reached target local-fs.target. Sep 6 01:20:22.749291 systemd[1]: Reached target machines.target. Sep 6 01:20:22.754746 systemd[1]: Starting ldconfig.service... Sep 6 01:20:22.758321 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:20:22.758392 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:20:22.759592 systemd[1]: Starting systemd-boot-update.service... Sep 6 01:20:22.764783 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 01:20:22.771257 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 01:20:22.776920 systemd[1]: Starting systemd-sysext.service... Sep 6 01:20:22.802835 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1366 (bootctl) Sep 6 01:20:22.804259 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
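Aside on the DHCPv4 lease recorded just above (10.200.20.27/24 with gateway 10.200.20.1, handed out by 168.63.129.16): a tiny stdlib check, offered purely as an illustration and not part of networkd or waagent, confirms that the advertised gateway sits inside the leased prefix:

    # Illustrative check of the lease values logged above; all addresses come from the log.
    import ipaddress

    lease = ipaddress.ip_interface("10.200.20.27/24")  # address/prefix from the log
    gateway = ipaddress.ip_address("10.200.20.1")      # gateway from the log

    # The gateway has to be on-link for the default route to be usable.
    assert gateway in lease.network
    print(lease.ip, "is on", lease.network, "- gateway", gateway, "is on-link")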
Sep 6 01:20:23.117641 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 01:20:23.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.128696 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 01:20:23.133935 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 01:20:23.134191 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 01:20:23.200262 kernel: loop0: detected capacity change from 0 to 203944 Sep 6 01:20:23.252169 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 01:20:23.253351 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 01:20:23.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.266305 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 01:20:23.288273 kernel: loop1: detected capacity change from 0 to 203944 Sep 6 01:20:23.295308 (sd-sysext)[1382]: Using extensions 'kubernetes'. Sep 6 01:20:23.295668 (sd-sysext)[1382]: Merged extensions into '/usr'. Sep 6 01:20:23.315454 systemd[1]: Mounting usr-share-oem.mount... Sep 6 01:20:23.319537 systemd-fsck[1374]: fsck.fat 4.2 (2021-01-31) Sep 6 01:20:23.319537 systemd-fsck[1374]: /dev/sda1: 236 files, 117310/258078 clusters Sep 6 01:20:23.321430 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:20:23.324698 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:20:23.333858 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:20:23.341961 systemd[1]: Starting modprobe@loop.service... Sep 6 01:20:23.349559 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:20:23.349709 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:20:23.352599 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 01:20:23.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.360035 systemd[1]: Mounted usr-share-oem.mount. Sep 6 01:20:23.364891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:20:23.365064 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:20:23.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.370154 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:20:23.370368 systemd[1]: Finished modprobe@efi_pstore.service. 
Sep 6 01:20:23.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.376162 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:20:23.376402 systemd[1]: Finished modprobe@loop.service. Sep 6 01:20:23.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.385106 systemd[1]: Mounting boot.mount... Sep 6 01:20:23.388724 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:20:23.388803 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:20:23.389272 systemd[1]: Finished systemd-sysext.service. Sep 6 01:20:23.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.396454 systemd[1]: Starting ensure-sysext.service... Sep 6 01:20:23.401763 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 01:20:23.409393 systemd[1]: Mounted boot.mount. Sep 6 01:20:23.416147 systemd[1]: Reloading. Sep 6 01:20:23.430769 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 01:20:23.444340 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 01:20:23.459316 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 01:20:23.468501 /usr/lib/systemd/system-generators/torcx-generator[1423]: time="2025-09-06T01:20:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:20:23.468825 /usr/lib/systemd/system-generators/torcx-generator[1423]: time="2025-09-06T01:20:23Z" level=info msg="torcx already run" Sep 6 01:20:23.555449 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:20:23.555468 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:20:23.571378 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:20:23.639447 systemd[1]: Finished systemd-boot-update.service. 
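The two warnings above about locksmithd.service are systemd flagging legacy cgroup directives while reloading units. As a hedged illustration only (no such scan appears in this log; scan_units is an invented helper), a short Python pass over the unit directory can surface the same deprecated settings ahead of a reload:

    # Illustrative scan for the deprecated directives systemd warned about above.
    # CPUShares= and MemoryLimit= are the legacy names; CPUWeight= and MemoryMax= replace them.
    from pathlib import Path

    DEPRECATED = {"CPUShares=": "CPUWeight=", "MemoryLimit=": "MemoryMax="}

    def scan_units(unit_dir="/usr/lib/systemd/system"):
        for unit in sorted(Path(unit_dir).glob("*.service")):
            for lineno, line in enumerate(unit.read_text(errors="replace").splitlines(), start=1):
                for old, new in DEPRECATED.items():
                    if line.strip().startswith(old):
                        print(f"{unit}:{lineno}: uses {old} -- consider {new}")

    if __name__ == "__main__":
        scan_units()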
Sep 6 01:20:23.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.653028 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:20:23.654746 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:20:23.660432 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:20:23.665931 systemd[1]: Starting modprobe@loop.service... Sep 6 01:20:23.669874 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:20:23.669996 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:20:23.670775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:20:23.670943 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:20:23.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.675877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:20:23.676033 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:20:23.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.680979 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:20:23.681193 systemd[1]: Finished modprobe@loop.service. Sep 6 01:20:23.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.688763 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:20:23.690162 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:20:23.695542 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:20:23.701265 systemd[1]: Starting modprobe@loop.service... Sep 6 01:20:23.705287 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:20:23.705430 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:20:23.706256 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 6 01:20:23.706435 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:20:23.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.711634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:20:23.711797 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:20:23.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.717285 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:20:23.717488 systemd[1]: Finished modprobe@loop.service. Sep 6 01:20:23.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.724550 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:20:23.725822 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:20:23.730828 systemd[1]: Starting modprobe@drm.service... Sep 6 01:20:23.736041 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:20:23.741298 systemd[1]: Starting modprobe@loop.service... Sep 6 01:20:23.744984 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:20:23.745110 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:20:23.746174 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:20:23.746366 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:20:23.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.751295 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 01:20:23.751461 systemd[1]: Finished modprobe@drm.service. Sep 6 01:20:23.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:23.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.756116 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:20:23.756384 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:20:23.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.761585 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:20:23.761766 systemd[1]: Finished modprobe@loop.service. Sep 6 01:20:23.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.766948 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:20:23.767057 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:20:23.768401 systemd[1]: Finished ensure-sysext.service. Sep 6 01:20:23.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.884765 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 01:20:23.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.891883 systemd[1]: Starting audit-rules.service... Sep 6 01:20:23.896828 systemd[1]: Starting clean-ca-certificates.service... Sep 6 01:20:23.902339 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 01:20:23.909559 systemd[1]: Starting systemd-resolved.service... Sep 6 01:20:23.915296 systemd[1]: Starting systemd-timesyncd.service... Sep 6 01:20:23.921635 systemd[1]: Starting systemd-update-utmp.service... Sep 6 01:20:23.926876 systemd[1]: Finished clean-ca-certificates.service. Sep 6 01:20:23.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:23.932038 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 01:20:23.955000 audit[1523]: SYSTEM_BOOT pid=1523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? 
addr=? terminal=? res=success' Sep 6 01:20:23.959881 systemd[1]: Finished systemd-update-utmp.service. Sep 6 01:20:23.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:24.020054 systemd[1]: Started systemd-timesyncd.service. Sep 6 01:20:24.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:24.024841 systemd[1]: Reached target time-set.target. Sep 6 01:20:24.077680 systemd-resolved[1520]: Positive Trust Anchors: Sep 6 01:20:24.077695 systemd-resolved[1520]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 01:20:24.077723 systemd-resolved[1520]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 01:20:24.130761 systemd-resolved[1520]: Using system hostname 'ci-3510.3.8-n-34c19deec5'. Sep 6 01:20:24.132341 systemd[1]: Started systemd-resolved.service. Sep 6 01:20:24.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:24.137167 systemd[1]: Reached target network.target. Sep 6 01:20:24.141981 systemd[1]: Reached target nss-lookup.target. Sep 6 01:20:24.190617 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 01:20:24.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:24.198000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 01:20:24.198000 audit[1539]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffd510350 a2=420 a3=0 items=0 ppid=1516 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:20:24.198000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 01:20:24.213960 augenrules[1539]: No rules Sep 6 01:20:24.215379 systemd[1]: Finished audit-rules.service. Sep 6 01:20:24.272129 systemd-timesyncd[1522]: Contacted time server 50.117.3.95:123 (0.flatcar.pool.ntp.org). Sep 6 01:20:24.272201 systemd-timesyncd[1522]: Initial clock synchronization to Sat 2025-09-06 01:20:24.268955 UTC. Sep 6 01:20:24.357381 systemd-networkd[1302]: eth0: Gained IPv6LL Sep 6 01:20:24.359075 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 01:20:24.364854 systemd[1]: Reached target network-online.target. 
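The timesyncd entry above shows one server (50.117.3.95) picked out of the 0.flatcar.pool.ntp.org pool. Purely as an illustration of how that pool name fans out to candidate addresses, and not code taken from this system, the stdlib resolver can be asked directly (the pool rotates, so the results will normally differ from the address logged):

    # Illustration only: resolve the NTP pool name referenced in the timesyncd entry above.
    import socket

    candidates = socket.getaddrinfo("0.flatcar.pool.ntp.org", 123, proto=socket.IPPROTO_UDP)
    for family, _type, _proto, _canonname, sockaddr in candidates:
        print(family.name, sockaddr[0])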
Sep 6 01:20:29.410760 ldconfig[1365]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 01:20:29.423385 systemd[1]: Finished ldconfig.service. Sep 6 01:20:29.429544 systemd[1]: Starting systemd-update-done.service... Sep 6 01:20:29.476697 systemd[1]: Finished systemd-update-done.service. Sep 6 01:20:29.481427 systemd[1]: Reached target sysinit.target. Sep 6 01:20:29.485393 systemd[1]: Started motdgen.path. Sep 6 01:20:29.488959 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 01:20:29.495141 systemd[1]: Started logrotate.timer. Sep 6 01:20:29.499617 systemd[1]: Started mdadm.timer. Sep 6 01:20:29.503566 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 01:20:29.508923 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 01:20:29.508953 systemd[1]: Reached target paths.target. Sep 6 01:20:29.512735 systemd[1]: Reached target timers.target. Sep 6 01:20:29.517649 systemd[1]: Listening on dbus.socket. Sep 6 01:20:29.522425 systemd[1]: Starting docker.socket... Sep 6 01:20:29.538952 systemd[1]: Listening on sshd.socket. Sep 6 01:20:29.542640 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:20:29.543026 systemd[1]: Listening on docker.socket. Sep 6 01:20:29.547711 systemd[1]: Reached target sockets.target. Sep 6 01:20:29.552476 systemd[1]: Reached target basic.target. Sep 6 01:20:29.556599 systemd[1]: System is tainted: cgroupsv1 Sep 6 01:20:29.556645 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 01:20:29.556666 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 01:20:29.557807 systemd[1]: Starting containerd.service... Sep 6 01:20:29.562438 systemd[1]: Starting dbus.service... Sep 6 01:20:29.566463 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 01:20:29.571675 systemd[1]: Starting extend-filesystems.service... Sep 6 01:20:29.575816 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 01:20:29.576968 systemd[1]: Starting kubelet.service... Sep 6 01:20:29.581313 systemd[1]: Starting motdgen.service... Sep 6 01:20:29.585472 systemd[1]: Started nvidia.service. Sep 6 01:20:29.590457 systemd[1]: Starting prepare-helm.service... Sep 6 01:20:29.596042 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 01:20:29.602312 systemd[1]: Starting sshd-keygen.service... Sep 6 01:20:29.608081 systemd[1]: Starting systemd-logind.service... Sep 6 01:20:29.612129 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:20:29.612192 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 01:20:29.613217 systemd[1]: Starting update-engine.service... Sep 6 01:20:29.617994 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 01:20:29.626141 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 01:20:29.626425 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Sep 6 01:20:29.640439 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 01:20:29.640681 systemd[1]: Finished motdgen.service. Sep 6 01:20:29.669488 extend-filesystems[1556]: Found loop1 Sep 6 01:20:29.675432 extend-filesystems[1556]: Found sda Sep 6 01:20:29.675432 extend-filesystems[1556]: Found sda1 Sep 6 01:20:29.675432 extend-filesystems[1556]: Found sda2 Sep 6 01:20:29.675432 extend-filesystems[1556]: Found sda3 Sep 6 01:20:29.675432 extend-filesystems[1556]: Found usr Sep 6 01:20:29.675432 extend-filesystems[1556]: Found sda4 Sep 6 01:20:29.675432 extend-filesystems[1556]: Found sda6 Sep 6 01:20:29.675432 extend-filesystems[1556]: Found sda7 Sep 6 01:20:29.675432 extend-filesystems[1556]: Found sda9 Sep 6 01:20:29.675432 extend-filesystems[1556]: Checking size of /dev/sda9 Sep 6 01:20:29.788717 jq[1575]: true Sep 6 01:20:29.788890 jq[1555]: false Sep 6 01:20:29.788948 tar[1581]: linux-arm64/helm Sep 6 01:20:29.689806 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 01:20:29.789217 env[1586]: time="2025-09-06T01:20:29.712168329Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 01:20:29.789483 extend-filesystems[1556]: Old size kept for /dev/sda9 Sep 6 01:20:29.789483 extend-filesystems[1556]: Found sr0 Sep 6 01:20:29.690046 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 01:20:29.822488 jq[1599]: true Sep 6 01:20:29.822636 env[1586]: time="2025-09-06T01:20:29.806885303Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 01:20:29.822636 env[1586]: time="2025-09-06T01:20:29.807185619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:20:29.822636 env[1586]: time="2025-09-06T01:20:29.808363088Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 01:20:29.822636 env[1586]: time="2025-09-06T01:20:29.808391764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:20:29.822636 env[1586]: time="2025-09-06T01:20:29.808658885Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 01:20:29.822636 env[1586]: time="2025-09-06T01:20:29.808676522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 01:20:29.822636 env[1586]: time="2025-09-06T01:20:29.808689400Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 01:20:29.822636 env[1586]: time="2025-09-06T01:20:29.808699559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 01:20:29.822636 env[1586]: time="2025-09-06T01:20:29.808765789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 6 01:20:29.822636 env[1586]: time="2025-09-06T01:20:29.808961321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:20:29.750835 systemd-logind[1571]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Sep 6 01:20:29.823112 env[1586]: time="2025-09-06T01:20:29.809115018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 01:20:29.823112 env[1586]: time="2025-09-06T01:20:29.809130856Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 01:20:29.823112 env[1586]: time="2025-09-06T01:20:29.809184368Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 01:20:29.823112 env[1586]: time="2025-09-06T01:20:29.809196487Z" level=info msg="metadata content store policy set" policy=shared Sep 6 01:20:29.751938 systemd-logind[1571]: New seat seat0. Sep 6 01:20:29.761287 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 01:20:29.761514 systemd[1]: Finished extend-filesystems.service. Sep 6 01:20:29.854275 env[1586]: time="2025-09-06T01:20:29.852256139Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 01:20:29.854275 env[1586]: time="2025-09-06T01:20:29.852310651Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 01:20:29.854275 env[1586]: time="2025-09-06T01:20:29.852327329Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 01:20:29.854275 env[1586]: time="2025-09-06T01:20:29.852364603Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 01:20:29.854275 env[1586]: time="2025-09-06T01:20:29.852382041Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 01:20:29.854275 env[1586]: time="2025-09-06T01:20:29.852396559Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 01:20:29.854275 env[1586]: time="2025-09-06T01:20:29.852408877Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 01:20:29.854275 env[1586]: time="2025-09-06T01:20:29.852768745Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 01:20:29.854275 env[1586]: time="2025-09-06T01:20:29.852788502Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 01:20:29.854275 env[1586]: time="2025-09-06T01:20:29.852801540Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 01:20:29.854275 env[1586]: time="2025-09-06T01:20:29.852814218Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 01:20:29.854275 env[1586]: time="2025-09-06T01:20:29.852827816Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 6 01:20:29.854275 env[1586]: time="2025-09-06T01:20:29.852962156Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 01:20:29.854275 env[1586]: time="2025-09-06T01:20:29.853034066Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 01:20:29.854625 env[1586]: time="2025-09-06T01:20:29.853360378Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 01:20:29.854625 env[1586]: time="2025-09-06T01:20:29.853388374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 01:20:29.854625 env[1586]: time="2025-09-06T01:20:29.853404652Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 01:20:29.854625 env[1586]: time="2025-09-06T01:20:29.853449086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 01:20:29.854625 env[1586]: time="2025-09-06T01:20:29.853463563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 01:20:29.854625 env[1586]: time="2025-09-06T01:20:29.853476202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 01:20:29.854625 env[1586]: time="2025-09-06T01:20:29.853487880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 01:20:29.854625 env[1586]: time="2025-09-06T01:20:29.853500038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 01:20:29.854625 env[1586]: time="2025-09-06T01:20:29.853511916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 01:20:29.854625 env[1586]: time="2025-09-06T01:20:29.853522635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 01:20:29.854625 env[1586]: time="2025-09-06T01:20:29.853534553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 01:20:29.854625 env[1586]: time="2025-09-06T01:20:29.853548111Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 01:20:29.854625 env[1586]: time="2025-09-06T01:20:29.853667774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 01:20:29.854625 env[1586]: time="2025-09-06T01:20:29.853682452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 01:20:29.854625 env[1586]: time="2025-09-06T01:20:29.853694810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 01:20:29.854889 env[1586]: time="2025-09-06T01:20:29.853706568Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 01:20:29.854889 env[1586]: time="2025-09-06T01:20:29.853720806Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 01:20:29.854889 env[1586]: time="2025-09-06T01:20:29.853731045Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Sep 6 01:20:29.854889 env[1586]: time="2025-09-06T01:20:29.853748042Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 01:20:29.854889 env[1586]: time="2025-09-06T01:20:29.853781757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 6 01:20:29.854982 env[1586]: time="2025-09-06T01:20:29.853978848Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 01:20:29.854982 env[1586]: time="2025-09-06T01:20:29.854031401Z" level=info msg="Connect containerd service" Sep 6 01:20:29.854982 env[1586]: time="2025-09-06T01:20:29.854064716Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 01:20:29.870854 env[1586]: time="2025-09-06T01:20:29.855490348Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 01:20:29.870854 env[1586]: time="2025-09-06T01:20:29.855734073Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 01:20:29.870854 env[1586]: time="2025-09-06T01:20:29.855770828Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 6 01:20:29.870854 env[1586]: time="2025-09-06T01:20:29.855816061Z" level=info msg="containerd successfully booted in 0.144395s" Sep 6 01:20:29.870854 env[1586]: time="2025-09-06T01:20:29.858635371Z" level=info msg="Start subscribing containerd event" Sep 6 01:20:29.870854 env[1586]: time="2025-09-06T01:20:29.858795907Z" level=info msg="Start recovering state" Sep 6 01:20:29.870854 env[1586]: time="2025-09-06T01:20:29.859308913Z" level=info msg="Start event monitor" Sep 6 01:20:29.870854 env[1586]: time="2025-09-06T01:20:29.859944300Z" level=info msg="Start snapshots syncer" Sep 6 01:20:29.860632 dbus-daemon[1554]: [system] SELinux support is enabled Sep 6 01:20:29.855907 systemd[1]: Started containerd.service. Sep 6 01:20:29.867866 dbus-daemon[1554]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 6 01:20:29.860792 systemd[1]: Started dbus.service. Sep 6 01:20:29.867323 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 01:20:29.867342 systemd[1]: Reached target system-config.target. Sep 6 01:20:29.876785 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 01:20:29.876807 systemd[1]: Reached target user-config.target. Sep 6 01:20:29.880069 env[1586]: time="2025-09-06T01:20:29.880028417Z" level=info msg="Start cni network conf syncer for default" Sep 6 01:20:29.880160 env[1586]: time="2025-09-06T01:20:29.880146640Z" level=info msg="Start streaming server" Sep 6 01:20:29.883603 systemd[1]: Started systemd-logind.service. Sep 6 01:20:29.890539 systemd[1]: nvidia.service: Deactivated successfully. Sep 6 01:20:29.915605 bash[1624]: Updated "/home/core/.ssh/authorized_keys" Sep 6 01:20:29.916497 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 01:20:30.273794 tar[1581]: linux-arm64/LICENSE Sep 6 01:20:30.273988 tar[1581]: linux-arm64/README.md Sep 6 01:20:30.278514 systemd[1]: Finished prepare-helm.service. Sep 6 01:20:30.327794 update_engine[1573]: I0906 01:20:30.312364 1573 main.cc:92] Flatcar Update Engine starting Sep 6 01:20:30.371697 systemd[1]: Started update-engine.service. Sep 6 01:20:30.371975 update_engine[1573]: I0906 01:20:30.371750 1573 update_check_scheduler.cc:74] Next update check in 6m59s Sep 6 01:20:30.378053 systemd[1]: Started locksmithd.service. Sep 6 01:20:30.672380 systemd[1]: Started kubelet.service. Sep 6 01:20:31.105862 kubelet[1674]: E0906 01:20:31.105771 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:20:31.107653 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:20:31.107792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:20:31.488857 sshd_keygen[1577]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 01:20:31.506626 systemd[1]: Finished sshd-keygen.service. Sep 6 01:20:31.512444 systemd[1]: Starting issuegen.service... Sep 6 01:20:31.516986 systemd[1]: Started waagent.service. Sep 6 01:20:31.521416 systemd[1]: issuegen.service: Deactivated successfully. 
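The kubelet error above comes down to a missing /var/lib/kubelet/config.yaml at this stage of first boot; the file is normally written later by whatever provisions the node (for example kubeadm), and systemd schedules the restart that is visible further below. A minimal sketch of that failing precondition, given only as an illustration:

    # Hedged sketch of the precondition the kubelet reports as missing above.
    from pathlib import Path

    config = Path("/var/lib/kubelet/config.yaml")
    if config.is_file():
        print("kubelet config present:", config)
    else:
        print("kubelet config missing:", config, "- kubelet exits until it is provisioned")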
Sep 6 01:20:31.521655 systemd[1]: Finished issuegen.service. Sep 6 01:20:31.527415 systemd[1]: Starting systemd-user-sessions.service... Sep 6 01:20:31.563164 systemd[1]: Finished systemd-user-sessions.service. Sep 6 01:20:31.569765 systemd[1]: Started getty@tty1.service. Sep 6 01:20:31.575938 systemd[1]: Started serial-getty@ttyAMA0.service. Sep 6 01:20:31.580501 systemd[1]: Reached target getty.target. Sep 6 01:20:31.584459 systemd[1]: Reached target multi-user.target. Sep 6 01:20:31.590026 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 01:20:31.598336 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 01:20:31.598543 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 01:20:31.605743 systemd[1]: Startup finished in 13.598s (kernel) + 19.089s (userspace) = 32.687s. Sep 6 01:20:31.647745 locksmithd[1669]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 01:20:32.134538 login[1703]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Sep 6 01:20:32.135275 login[1702]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 6 01:20:32.187763 systemd[1]: Created slice user-500.slice. Sep 6 01:20:32.188780 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 01:20:32.191285 systemd-logind[1571]: New session 1 of user core. Sep 6 01:20:32.233635 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 01:20:32.234882 systemd[1]: Starting user@500.service... Sep 6 01:20:32.265802 (systemd)[1709]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:32.471793 systemd[1709]: Queued start job for default target default.target. Sep 6 01:20:32.472669 systemd[1709]: Reached target paths.target. Sep 6 01:20:32.472787 systemd[1709]: Reached target sockets.target. Sep 6 01:20:32.472861 systemd[1709]: Reached target timers.target. Sep 6 01:20:32.472935 systemd[1709]: Reached target basic.target. Sep 6 01:20:32.473041 systemd[1709]: Reached target default.target. Sep 6 01:20:32.473116 systemd[1]: Started user@500.service. Sep 6 01:20:32.473921 systemd[1709]: Startup finished in 201ms. Sep 6 01:20:32.473996 systemd[1]: Started session-1.scope. Sep 6 01:20:33.135970 login[1703]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 6 01:20:33.140044 systemd-logind[1571]: New session 2 of user core. Sep 6 01:20:33.140455 systemd[1]: Started session-2.scope. Sep 6 01:20:37.212500 waagent[1697]: 2025-09-06T01:20:37.212387Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Sep 6 01:20:37.232911 waagent[1697]: 2025-09-06T01:20:37.232822Z INFO Daemon Daemon OS: flatcar 3510.3.8 Sep 6 01:20:37.237612 waagent[1697]: 2025-09-06T01:20:37.237543Z INFO Daemon Daemon Python: 3.9.16 Sep 6 01:20:37.242166 waagent[1697]: 2025-09-06T01:20:37.242087Z INFO Daemon Daemon Run daemon Sep 6 01:20:37.247592 waagent[1697]: 2025-09-06T01:20:37.247298Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Sep 6 01:20:37.264889 waagent[1697]: 2025-09-06T01:20:37.264739Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Sep 6 01:20:37.279854 waagent[1697]: 2025-09-06T01:20:37.279707Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 6 01:20:37.289498 waagent[1697]: 2025-09-06T01:20:37.289409Z INFO Daemon Daemon cloud-init is enabled: False Sep 6 01:20:37.294736 waagent[1697]: 2025-09-06T01:20:37.294660Z INFO Daemon Daemon Using waagent for provisioning Sep 6 01:20:37.300332 waagent[1697]: 2025-09-06T01:20:37.300262Z INFO Daemon Daemon Activate resource disk Sep 6 01:20:37.304911 waagent[1697]: 2025-09-06T01:20:37.304842Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 6 01:20:37.319188 waagent[1697]: 2025-09-06T01:20:37.319103Z INFO Daemon Daemon Found device: None Sep 6 01:20:37.323755 waagent[1697]: 2025-09-06T01:20:37.323678Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 6 01:20:37.332221 waagent[1697]: 2025-09-06T01:20:37.332146Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 6 01:20:37.344175 waagent[1697]: 2025-09-06T01:20:37.344108Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 6 01:20:37.349987 waagent[1697]: 2025-09-06T01:20:37.349919Z INFO Daemon Daemon Running default provisioning handler Sep 6 01:20:37.363601 waagent[1697]: 2025-09-06T01:20:37.363453Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Sep 6 01:20:37.378809 waagent[1697]: 2025-09-06T01:20:37.378667Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 6 01:20:37.388969 waagent[1697]: 2025-09-06T01:20:37.388891Z INFO Daemon Daemon cloud-init is enabled: False Sep 6 01:20:37.394093 waagent[1697]: 2025-09-06T01:20:37.394024Z INFO Daemon Daemon Copying ovf-env.xml Sep 6 01:20:37.461680 waagent[1697]: 2025-09-06T01:20:37.461366Z INFO Daemon Daemon Successfully mounted dvd Sep 6 01:20:37.532956 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 6 01:20:37.606750 waagent[1697]: 2025-09-06T01:20:37.606604Z INFO Daemon Daemon Detect protocol endpoint Sep 6 01:20:37.612048 waagent[1697]: 2025-09-06T01:20:37.611970Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 6 01:20:37.617398 waagent[1697]: 2025-09-06T01:20:37.617332Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 6 01:20:37.623690 waagent[1697]: 2025-09-06T01:20:37.623618Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 6 01:20:37.628946 waagent[1697]: 2025-09-06T01:20:37.628882Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 6 01:20:37.634814 waagent[1697]: 2025-09-06T01:20:37.634754Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 6 01:20:37.727863 waagent[1697]: 2025-09-06T01:20:37.727796Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 6 01:20:37.735425 waagent[1697]: 2025-09-06T01:20:37.735380Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 6 01:20:37.741038 waagent[1697]: 2025-09-06T01:20:37.740975Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 6 01:20:38.460868 waagent[1697]: 2025-09-06T01:20:38.460728Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 6 01:20:38.476164 waagent[1697]: 2025-09-06T01:20:38.476082Z INFO Daemon Daemon Forcing an update of the goal state.. Sep 6 01:20:38.482490 waagent[1697]: 2025-09-06T01:20:38.482411Z INFO Daemon Daemon Fetching goal state [incarnation 1] Sep 6 01:20:38.635618 waagent[1697]: 2025-09-06T01:20:38.635494Z INFO Daemon Daemon Found private key matching thumbprint A2D6D7C13F4F12CF56A86D49DFC0C368C5E0A4CD Sep 6 01:20:38.644042 waagent[1697]: 2025-09-06T01:20:38.643960Z INFO Daemon Daemon Fetch goal state completed Sep 6 01:20:38.695876 waagent[1697]: 2025-09-06T01:20:38.695819Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: c98995fd-368d-41e1-a4be-73e527a5f385 New eTag: 17497916555422493163] Sep 6 01:20:38.707188 waagent[1697]: 2025-09-06T01:20:38.707106Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Sep 6 01:20:38.759824 waagent[1697]: 2025-09-06T01:20:38.759705Z INFO Daemon Daemon Starting provisioning Sep 6 01:20:38.765110 waagent[1697]: 2025-09-06T01:20:38.765030Z INFO Daemon Daemon Handle ovf-env.xml. Sep 6 01:20:38.769945 waagent[1697]: 2025-09-06T01:20:38.769878Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-34c19deec5] Sep 6 01:20:38.809779 waagent[1697]: 2025-09-06T01:20:38.809654Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-34c19deec5] Sep 6 01:20:38.816457 waagent[1697]: 2025-09-06T01:20:38.816379Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 6 01:20:38.822882 waagent[1697]: 2025-09-06T01:20:38.822818Z INFO Daemon Daemon Primary interface is [eth0] Sep 6 01:20:38.838850 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Sep 6 01:20:38.839063 systemd[1]: Stopped systemd-networkd-wait-online.service. Sep 6 01:20:38.839117 systemd[1]: Stopping systemd-networkd-wait-online.service... Sep 6 01:20:38.839334 systemd[1]: Stopping systemd-networkd.service... Sep 6 01:20:38.843314 systemd-networkd[1302]: eth0: DHCPv6 lease lost Sep 6 01:20:38.845450 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 01:20:38.845690 systemd[1]: Stopped systemd-networkd.service. Sep 6 01:20:38.847666 systemd[1]: Starting systemd-networkd.service... 
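Above, waagent logs "Examine /proc/net/route for primary interface" and settles on eth0. A rough stdlib-only approximation of that step (a sketch of the idea, not waagent's actual implementation; primary_interface is a name made up for this sketch) is to pick the interface carrying the default route:

    # Sketch only: treat the interface holding the default route as "primary",
    # loosely mirroring the waagent step logged above.

    def primary_interface(route_file="/proc/net/route"):
        with open(route_file) as fh:
            next(fh)  # skip the header row (Iface Destination Gateway Flags ...)
            for line in fh:
                fields = line.split()
                iface, destination, flags = fields[0], fields[1], int(fields[3], 16)
                # Destination 00000000 with the gateway flag (RTF_GATEWAY, 0x2) set is the default route.
                if destination == "00000000" and flags & 0x2:
                    return iface
        return None

    print(primary_interface())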
Sep 6 01:20:38.881749 systemd-networkd[1753]: enP26380s1: Link UP Sep 6 01:20:38.881761 systemd-networkd[1753]: enP26380s1: Gained carrier Sep 6 01:20:38.882768 systemd-networkd[1753]: eth0: Link UP Sep 6 01:20:38.882778 systemd-networkd[1753]: eth0: Gained carrier Sep 6 01:20:38.883114 systemd-networkd[1753]: lo: Link UP Sep 6 01:20:38.883123 systemd-networkd[1753]: lo: Gained carrier Sep 6 01:20:38.883437 systemd-networkd[1753]: eth0: Gained IPv6LL Sep 6 01:20:38.884562 systemd-networkd[1753]: Enumeration completed Sep 6 01:20:38.884692 systemd[1]: Started systemd-networkd.service. Sep 6 01:20:38.886414 waagent[1697]: 2025-09-06T01:20:38.886236Z INFO Daemon Daemon Create user account if not exists Sep 6 01:20:38.886513 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 01:20:38.893120 waagent[1697]: 2025-09-06T01:20:38.893030Z INFO Daemon Daemon User core already exists, skip useradd Sep 6 01:20:38.899772 waagent[1697]: 2025-09-06T01:20:38.899675Z INFO Daemon Daemon Configure sudoer Sep 6 01:20:38.899996 systemd-networkd[1753]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 01:20:38.905553 waagent[1697]: 2025-09-06T01:20:38.905458Z INFO Daemon Daemon Configure sshd Sep 6 01:20:38.910165 waagent[1697]: 2025-09-06T01:20:38.910070Z INFO Daemon Daemon Deploy ssh public key. Sep 6 01:20:38.925323 systemd-networkd[1753]: eth0: DHCPv4 address 10.200.20.27/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 6 01:20:38.929769 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 01:20:40.106926 waagent[1697]: 2025-09-06T01:20:40.106864Z INFO Daemon Daemon Provisioning complete Sep 6 01:20:40.127790 waagent[1697]: 2025-09-06T01:20:40.127726Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 6 01:20:40.134690 waagent[1697]: 2025-09-06T01:20:40.134606Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 6 01:20:40.145370 waagent[1697]: 2025-09-06T01:20:40.145284Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Sep 6 01:20:40.444337 waagent[1760]: 2025-09-06T01:20:40.444230Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Sep 6 01:20:40.445082 waagent[1760]: 2025-09-06T01:20:40.445020Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:20:40.445207 waagent[1760]: 2025-09-06T01:20:40.445162Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:20:40.457859 waagent[1760]: 2025-09-06T01:20:40.457788Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Sep 6 01:20:40.458032 waagent[1760]: 2025-09-06T01:20:40.457985Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Sep 6 01:20:40.515965 waagent[1760]: 2025-09-06T01:20:40.515824Z INFO ExtHandler ExtHandler Found private key matching thumbprint A2D6D7C13F4F12CF56A86D49DFC0C368C5E0A4CD Sep 6 01:20:40.516279 waagent[1760]: 2025-09-06T01:20:40.516203Z INFO ExtHandler ExtHandler Fetch goal state completed Sep 6 01:20:40.530573 waagent[1760]: 2025-09-06T01:20:40.530523Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 3894d934-a2cd-437e-be6e-ac13f02fe749 New eTag: 17497916555422493163] Sep 6 01:20:40.531146 waagent[1760]: 2025-09-06T01:20:40.531088Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Sep 6 01:20:40.589736 waagent[1760]: 2025-09-06T01:20:40.589588Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 6 01:20:40.615660 waagent[1760]: 2025-09-06T01:20:40.614013Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1760 Sep 6 01:20:40.620150 waagent[1760]: 2025-09-06T01:20:40.620083Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 6 01:20:40.621536 waagent[1760]: 2025-09-06T01:20:40.621481Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 6 01:20:41.095993 waagent[1760]: 2025-09-06T01:20:41.095936Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 6 01:20:41.096603 waagent[1760]: 2025-09-06T01:20:41.096548Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 6 01:20:41.104793 waagent[1760]: 2025-09-06T01:20:41.104743Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 6 01:20:41.105442 waagent[1760]: 2025-09-06T01:20:41.105389Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 6 01:20:41.106697 waagent[1760]: 2025-09-06T01:20:41.106638Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Sep 6 01:20:41.108126 waagent[1760]: 2025-09-06T01:20:41.108064Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 6 01:20:41.108422 waagent[1760]: 2025-09-06T01:20:41.108353Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:20:41.109115 waagent[1760]: 2025-09-06T01:20:41.109047Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:20:41.109696 waagent[1760]: 2025-09-06T01:20:41.109634Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 6 01:20:41.110020 waagent[1760]: 2025-09-06T01:20:41.109961Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 6 01:20:41.110020 waagent[1760]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 6 01:20:41.110020 waagent[1760]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 6 01:20:41.110020 waagent[1760]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 6 01:20:41.110020 waagent[1760]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:20:41.110020 waagent[1760]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:20:41.110020 waagent[1760]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:20:41.112189 waagent[1760]: 2025-09-06T01:20:41.112029Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 6 01:20:41.112781 waagent[1760]: 2025-09-06T01:20:41.112703Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:20:41.113343 waagent[1760]: 2025-09-06T01:20:41.113278Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:20:41.113900 waagent[1760]: 2025-09-06T01:20:41.113832Z INFO EnvHandler ExtHandler Configure routes Sep 6 01:20:41.114047 waagent[1760]: 2025-09-06T01:20:41.114001Z INFO EnvHandler ExtHandler Gateway:None Sep 6 01:20:41.114161 waagent[1760]: 2025-09-06T01:20:41.114119Z INFO EnvHandler ExtHandler Routes:None Sep 6 01:20:41.115017 waagent[1760]: 2025-09-06T01:20:41.114960Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 6 01:20:41.115161 waagent[1760]: 2025-09-06T01:20:41.115096Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 6 01:20:41.115893 waagent[1760]: 2025-09-06T01:20:41.115805Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 6 01:20:41.116057 waagent[1760]: 2025-09-06T01:20:41.115991Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 6 01:20:41.116362 waagent[1760]: 2025-09-06T01:20:41.116294Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 6 01:20:41.127532 waagent[1760]: 2025-09-06T01:20:41.127464Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Sep 6 01:20:41.128970 waagent[1760]: 2025-09-06T01:20:41.128923Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 6 01:20:41.129946 waagent[1760]: 2025-09-06T01:20:41.129893Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Sep 6 01:20:41.150560 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 01:20:41.150705 systemd[1]: Stopped kubelet.service. Sep 6 01:20:41.152031 systemd[1]: Starting kubelet.service... Sep 6 01:20:41.157454 waagent[1760]: 2025-09-06T01:20:41.157333Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1753' Sep 6 01:20:41.189973 waagent[1760]: 2025-09-06T01:20:41.189905Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Sep 6 01:20:41.296283 systemd[1]: Started kubelet.service. 
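The routing table that MonitorHandler prints above is the raw /proc/net/route dump, where Destination, Gateway and Mask are 32-bit values in little-endian hex. As a small illustrative sketch (the helper below is ours, not part of waagent), the entries logged decode to the default route via 10.200.20.1 plus host routes to 168.63.129.16 (the wire server) and 169.254.169.254 (IMDS):

import socket
import struct

def hex_to_ip(field: str) -> str:
    # /proc/net/route stores addresses as little-endian 32-bit hex on this platform.
    return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

# Destination/Gateway pairs copied from the table logged above.
for dest, gw in [("00000000", "0114C80A"),   # default route via 10.200.20.1
                 ("0014C80A", "00000000"),   # 10.200.20.0/24, on-link
                 ("10813FA8", "0114C80A"),   # 168.63.129.16 (wire server)
                 ("FEA9FEA9", "0114C80A")]:  # 169.254.169.254 (IMDS)
    print(hex_to_ip(dest), "via", hex_to_ip(gw))
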
Sep 6 01:20:41.372646 kubelet[1794]: E0906 01:20:41.372551 1794 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:20:41.375041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:20:41.375185 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:20:41.572873 waagent[1760]: 2025-09-06T01:20:41.572806Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting Sep 6 01:20:42.149603 waagent[1697]: 2025-09-06T01:20:42.149482Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Sep 6 01:20:42.155381 waagent[1697]: 2025-09-06T01:20:42.155322Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent Sep 6 01:20:43.415820 waagent[1803]: 2025-09-06T01:20:43.415723Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1) Sep 6 01:20:43.416537 waagent[1803]: 2025-09-06T01:20:43.416478Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Sep 6 01:20:43.416683 waagent[1803]: 2025-09-06T01:20:43.416638Z INFO ExtHandler ExtHandler Python: 3.9.16 Sep 6 01:20:43.416828 waagent[1803]: 2025-09-06T01:20:43.416784Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Sep 6 01:20:43.429769 waagent[1803]: 2025-09-06T01:20:43.429660Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 6 01:20:43.430177 waagent[1803]: 2025-09-06T01:20:43.430121Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:20:43.430359 waagent[1803]: 2025-09-06T01:20:43.430313Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:20:43.430580 waagent[1803]: 2025-09-06T01:20:43.430533Z INFO ExtHandler ExtHandler Initializing the goal state... Sep 6 01:20:43.443769 waagent[1803]: 2025-09-06T01:20:43.443706Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 6 01:20:43.457175 waagent[1803]: 2025-09-06T01:20:43.457116Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 6 01:20:43.458243 waagent[1803]: 2025-09-06T01:20:43.458184Z INFO ExtHandler Sep 6 01:20:43.458421 waagent[1803]: 2025-09-06T01:20:43.458373Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4938c9f8-2f05-4de6-b3dd-dc776852d50c eTag: 17497916555422493163 source: Fabric] Sep 6 01:20:43.459170 waagent[1803]: 2025-09-06T01:20:43.459114Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
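For context on the repeated "Fetching goal state [incarnation 1]" steps above: the agent talks to the wire server at 168.63.129.16 over plain HTTP using the negotiated wire protocol version (2012-11-30 in this boot). A minimal stand-alone sketch, assuming the commonly documented goal-state path and x-ms-version header (neither appears verbatim in this log), might look like:

import urllib.request

WIRESERVER = "168.63.129.16"  # endpoint read from file, as logged above

def fetch_goal_state() -> str:
    # Assumption: /machine/?comp=goalstate and the x-ms-version header follow the
    # published wire protocol; check your agent version before relying on them.
    req = urllib.request.Request(
        f"http://{WIRESERVER}/machine/?comp=goalstate",
        headers={"x-ms-version": "2012-11-30"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode()

print(fetch_goal_state()[:200])  # incarnation, container id, role instance, ...
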
Sep 6 01:20:43.460436 waagent[1803]: 2025-09-06T01:20:43.460379Z INFO ExtHandler Sep 6 01:20:43.460580 waagent[1803]: 2025-09-06T01:20:43.460536Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 6 01:20:43.467909 waagent[1803]: 2025-09-06T01:20:43.467863Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 6 01:20:43.468411 waagent[1803]: 2025-09-06T01:20:43.468364Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 6 01:20:43.487451 waagent[1803]: 2025-09-06T01:20:43.487396Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Sep 6 01:20:43.553366 waagent[1803]: 2025-09-06T01:20:43.553212Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A2D6D7C13F4F12CF56A86D49DFC0C368C5E0A4CD', 'hasPrivateKey': True} Sep 6 01:20:43.554673 waagent[1803]: 2025-09-06T01:20:43.554610Z INFO ExtHandler Fetch goal state from WireServer completed Sep 6 01:20:43.555607 waagent[1803]: 2025-09-06T01:20:43.555550Z INFO ExtHandler ExtHandler Goal state initialization completed. Sep 6 01:20:43.575412 waagent[1803]: 2025-09-06T01:20:43.575310Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Sep 6 01:20:43.582685 waagent[1803]: 2025-09-06T01:20:43.582590Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 6 01:20:43.586035 waagent[1803]: 2025-09-06T01:20:43.585935Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Sep 6 01:20:43.586309 waagent[1803]: 2025-09-06T01:20:43.586192Z INFO ExtHandler ExtHandler Checking state of the firewall Sep 6 01:20:43.716905 waagent[1803]: 2025-09-06T01:20:43.716736Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric: Sep 6 01:20:43.716905 waagent[1803]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:20:43.716905 waagent[1803]: pkts bytes target prot opt in out source destination Sep 6 01:20:43.716905 waagent[1803]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:20:43.716905 waagent[1803]: pkts bytes target prot opt in out source destination Sep 6 01:20:43.716905 waagent[1803]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:20:43.716905 waagent[1803]: pkts bytes target prot opt in out source destination Sep 6 01:20:43.716905 waagent[1803]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 6 01:20:43.716905 waagent[1803]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 6 01:20:43.716905 waagent[1803]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 6 01:20:43.717975 waagent[1803]: 2025-09-06T01:20:43.717915Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Sep 6 01:20:43.720683 waagent[1803]: 2025-09-06T01:20:43.720571Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Sep 6 01:20:43.720948 waagent[1803]: 2025-09-06T01:20:43.720896Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 6 01:20:43.721331 waagent[1803]: 2025-09-06T01:20:43.721275Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 6 01:20:43.728793 waagent[1803]: 2025-09-06T01:20:43.728733Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Sep 6 01:20:43.729296 waagent[1803]: 2025-09-06T01:20:43.729223Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 6 01:20:43.737547 waagent[1803]: 2025-09-06T01:20:43.737476Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1803 Sep 6 01:20:43.740876 waagent[1803]: 2025-09-06T01:20:43.740815Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 6 01:20:43.741736 waagent[1803]: 2025-09-06T01:20:43.741679Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Sep 6 01:20:43.742643 waagent[1803]: 2025-09-06T01:20:43.742588Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 6 01:20:43.745403 waagent[1803]: 2025-09-06T01:20:43.745348Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Sep 6 01:20:43.745746 waagent[1803]: 2025-09-06T01:20:43.745694Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 6 01:20:43.747136 waagent[1803]: 2025-09-06T01:20:43.747067Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 6 01:20:43.747830 waagent[1803]: 2025-09-06T01:20:43.747772Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:20:43.748096 waagent[1803]: 2025-09-06T01:20:43.748048Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:20:43.748756 waagent[1803]: 2025-09-06T01:20:43.748704Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 6 01:20:43.749198 waagent[1803]: 2025-09-06T01:20:43.749147Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 6 01:20:43.749198 waagent[1803]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 6 01:20:43.749198 waagent[1803]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 6 01:20:43.749198 waagent[1803]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 6 01:20:43.749198 waagent[1803]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:20:43.749198 waagent[1803]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:20:43.749198 waagent[1803]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:20:43.750114 waagent[1803]: 2025-09-06T01:20:43.750038Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Sep 6 01:20:43.751993 waagent[1803]: 2025-09-06T01:20:43.751827Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:20:43.752394 waagent[1803]: 2025-09-06T01:20:43.752329Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:20:43.753046 waagent[1803]: 2025-09-06T01:20:43.752968Z INFO EnvHandler ExtHandler Configure routes Sep 6 01:20:43.753474 waagent[1803]: 2025-09-06T01:20:43.753413Z INFO EnvHandler ExtHandler Gateway:None Sep 6 01:20:43.753626 waagent[1803]: 2025-09-06T01:20:43.753575Z INFO EnvHandler ExtHandler Routes:None Sep 6 01:20:43.754835 waagent[1803]: 2025-09-06T01:20:43.754713Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 6 01:20:43.754919 waagent[1803]: 2025-09-06T01:20:43.754866Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 6 01:20:43.757908 waagent[1803]: 2025-09-06T01:20:43.757798Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 6 01:20:43.758071 waagent[1803]: 2025-09-06T01:20:43.758004Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 6 01:20:43.759172 waagent[1803]: 2025-09-06T01:20:43.759089Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 6 01:20:43.772087 waagent[1803]: 2025-09-06T01:20:43.772017Z INFO MonitorHandler ExtHandler Network interfaces: Sep 6 01:20:43.772087 waagent[1803]: Executing ['ip', '-a', '-o', 'link']: Sep 6 01:20:43.772087 waagent[1803]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 6 01:20:43.772087 waagent[1803]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fd:1f:c0 brd ff:ff:ff:ff:ff:ff Sep 6 01:20:43.772087 waagent[1803]: 3: enP26380s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fd:1f:c0 brd ff:ff:ff:ff:ff:ff\ altname enP26380p0s2 Sep 6 01:20:43.772087 waagent[1803]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 6 01:20:43.772087 waagent[1803]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 6 01:20:43.772087 waagent[1803]: 2: eth0 inet 10.200.20.27/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 6 01:20:43.772087 waagent[1803]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 6 01:20:43.772087 waagent[1803]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 6 01:20:43.772087 waagent[1803]: 2: eth0 inet6 fe80::20d:3aff:fefd:1fc0/64 scope link \ valid_lft forever preferred_lft forever Sep 6 01:20:43.777156 waagent[1803]: 2025-09-06T01:20:43.777053Z INFO ExtHandler ExtHandler Downloading agent manifest Sep 6 01:20:43.792392 waagent[1803]: 2025-09-06T01:20:43.792314Z INFO ExtHandler ExtHandler Sep 6 01:20:43.793718 waagent[1803]: 2025-09-06T01:20:43.793645Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 15163a0e-fc81-4d73-bc7e-01f49ed48b38 correlation 34700ad6-ea7f-47f9-80f8-38c6f0a3dad1 created: 2025-09-06T01:19:18.791306Z] Sep 6 01:20:43.797314 waagent[1803]: 2025-09-06T01:20:43.797230Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Sep 6 01:20:43.802366 waagent[1803]: 2025-09-06T01:20:43.802301Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 9 ms] Sep 6 01:20:43.808832 waagent[1803]: 2025-09-06T01:20:43.808761Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 6 01:20:43.832931 waagent[1803]: 2025-09-06T01:20:43.832857Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 6 01:20:43.844194 waagent[1803]: 2025-09-06T01:20:43.844097Z INFO ExtHandler ExtHandler Looking for existing remote access users. Sep 6 01:20:43.846919 waagent[1803]: 2025-09-06T01:20:43.846856Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D2CD42EC-7C91-4DAF-9371-1F058EF959B5;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Sep 6 01:20:51.402100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 01:20:51.402289 systemd[1]: Stopped kubelet.service. Sep 6 01:20:51.403668 systemd[1]: Starting kubelet.service... Sep 6 01:20:51.727361 systemd[1]: Started kubelet.service. Sep 6 01:20:51.769999 kubelet[1852]: E0906 01:20:51.769937 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:20:51.771873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:20:51.772011 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:21:01.902113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 6 01:21:01.902306 systemd[1]: Stopped kubelet.service. Sep 6 01:21:01.903685 systemd[1]: Starting kubelet.service... Sep 6 01:21:02.295068 systemd[1]: Started kubelet.service. Sep 6 01:21:02.330651 kubelet[1867]: E0906 01:21:02.330588 1867 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:21:02.332375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:21:02.332529 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:21:05.485285 systemd[1]: Created slice system-sshd.slice. Sep 6 01:21:05.486489 systemd[1]: Started sshd@0-10.200.20.27:22-10.200.16.10:34990.service. Sep 6 01:21:08.176096 sshd[1873]: Accepted publickey for core from 10.200.16.10 port 34990 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:21:08.192356 sshd[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:21:08.196065 systemd-logind[1571]: New session 3 of user core. Sep 6 01:21:08.196498 systemd[1]: Started session-3.scope. Sep 6 01:21:08.559713 systemd[1]: Started sshd@1-10.200.20.27:22-10.200.16.10:35004.service. Sep 6 01:21:08.973976 sshd[1878]: Accepted publickey for core from 10.200.16.10 port 35004 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:21:08.975235 sshd[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:21:08.979001 systemd-logind[1571]: New session 4 of user core. 
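The kubelet restart loop above (restart counters 2 and 3, with more below) is systemd retrying a unit that exits immediately because /var/lib/kubelet/config.yaml has not been written yet; on this image that file is normally created later by kubeadm or the cluster provisioner. Purely as a hypothetical illustration of the file the unit is looking for (not a fix, and not taken from this log), the standard KubeletConfiguration header could be dropped in place like this:

from pathlib import Path

# Hypothetical placeholder only: a real config comes from kubeadm/provisioning and
# carries cluster-specific settings beyond these two header fields.
MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
"""

def write_placeholder(path: str = "/var/lib/kubelet/config.yaml") -> None:
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(MINIMAL_KUBELET_CONFIG)
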
Sep 6 01:21:08.979413 systemd[1]: Started session-4.scope. Sep 6 01:21:09.300266 sshd[1878]: pam_unix(sshd:session): session closed for user core Sep 6 01:21:09.303084 systemd-logind[1571]: Session 4 logged out. Waiting for processes to exit. Sep 6 01:21:09.303404 systemd[1]: sshd@1-10.200.20.27:22-10.200.16.10:35004.service: Deactivated successfully. Sep 6 01:21:09.304112 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 01:21:09.305112 systemd-logind[1571]: Removed session 4. Sep 6 01:21:09.388384 systemd[1]: Started sshd@2-10.200.20.27:22-10.200.16.10:35020.service. Sep 6 01:21:09.838767 sshd[1885]: Accepted publickey for core from 10.200.16.10 port 35020 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:21:09.840059 sshd[1885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:21:09.844095 systemd[1]: Started session-5.scope. Sep 6 01:21:09.844548 systemd-logind[1571]: New session 5 of user core. Sep 6 01:21:10.174642 sshd[1885]: pam_unix(sshd:session): session closed for user core Sep 6 01:21:10.177390 systemd-logind[1571]: Session 5 logged out. Waiting for processes to exit. Sep 6 01:21:10.178115 systemd[1]: sshd@2-10.200.20.27:22-10.200.16.10:35020.service: Deactivated successfully. Sep 6 01:21:10.178869 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 01:21:10.179477 systemd-logind[1571]: Removed session 5. Sep 6 01:21:10.233453 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 6 01:21:10.248473 systemd[1]: Started sshd@3-10.200.20.27:22-10.200.16.10:53384.service. Sep 6 01:21:10.700811 sshd[1892]: Accepted publickey for core from 10.200.16.10 port 53384 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:21:10.702049 sshd[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:21:10.706331 systemd[1]: Started session-6.scope. Sep 6 01:21:10.706509 systemd-logind[1571]: New session 6 of user core. Sep 6 01:21:11.045890 sshd[1892]: pam_unix(sshd:session): session closed for user core Sep 6 01:21:11.048506 systemd-logind[1571]: Session 6 logged out. Waiting for processes to exit. Sep 6 01:21:11.049262 systemd[1]: sshd@3-10.200.20.27:22-10.200.16.10:53384.service: Deactivated successfully. Sep 6 01:21:11.050003 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 01:21:11.050419 systemd-logind[1571]: Removed session 6. Sep 6 01:21:11.112826 systemd[1]: Started sshd@4-10.200.20.27:22-10.200.16.10:53390.service. Sep 6 01:21:11.526485 sshd[1899]: Accepted publickey for core from 10.200.16.10 port 53390 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:21:11.527707 sshd[1899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:21:11.531433 systemd-logind[1571]: New session 7 of user core. Sep 6 01:21:11.531816 systemd[1]: Started session-7.scope. Sep 6 01:21:12.040544 sudo[1903]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 6 01:21:12.040756 sudo[1903]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 01:21:12.084372 dbus-daemon[1554]: avc: received setenforce notice (enforcing=1) Sep 6 01:21:12.085258 sudo[1903]: pam_unix(sudo:session): session closed for user root Sep 6 01:21:12.207881 sshd[1899]: pam_unix(sshd:session): session closed for user core Sep 6 01:21:12.210970 systemd[1]: sshd@4-10.200.20.27:22-10.200.16.10:53390.service: Deactivated successfully. Sep 6 01:21:12.211681 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 6 01:21:12.212304 systemd-logind[1571]: Session 7 logged out. Waiting for processes to exit. Sep 6 01:21:12.213201 systemd-logind[1571]: Removed session 7. Sep 6 01:21:12.274614 systemd[1]: Started sshd@5-10.200.20.27:22-10.200.16.10:53406.service. Sep 6 01:21:12.402221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 6 01:21:12.402414 systemd[1]: Stopped kubelet.service. Sep 6 01:21:12.403907 systemd[1]: Starting kubelet.service... Sep 6 01:21:12.788031 sshd[1907]: Accepted publickey for core from 10.200.16.10 port 53406 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:21:12.788692 sshd[1907]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:21:12.793308 systemd[1]: Started session-8.scope. Sep 6 01:21:12.793789 systemd-logind[1571]: New session 8 of user core. Sep 6 01:21:12.857777 systemd[1]: Started kubelet.service. Sep 6 01:21:12.963908 kubelet[1919]: E0906 01:21:12.963857 1919 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:21:12.965570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:21:12.965713 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:21:12.978795 sudo[1926]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 6 01:21:12.979009 sudo[1926]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 01:21:12.981464 sudo[1926]: pam_unix(sudo:session): session closed for user root Sep 6 01:21:12.985724 sudo[1925]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 6 01:21:12.985916 sudo[1925]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 01:21:12.994194 systemd[1]: Stopping audit-rules.service... Sep 6 01:21:12.993000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 6 01:21:12.996612 auditctl[1929]: No rules Sep 6 01:21:12.999347 kernel: kauditd_printk_skb: 84 callbacks suppressed Sep 6 01:21:12.999429 kernel: audit: type=1305 audit(1757121672.993:165): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 6 01:21:13.001574 systemd[1]: audit-rules.service: Deactivated successfully. Sep 6 01:21:13.001823 systemd[1]: Stopped audit-rules.service. Sep 6 01:21:13.003638 systemd[1]: Starting audit-rules.service... 
Sep 6 01:21:12.993000 audit[1929]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff81375d0 a2=420 a3=0 items=0 ppid=1 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:13.034686 kernel: audit: type=1300 audit(1757121672.993:165): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff81375d0 a2=420 a3=0 items=0 ppid=1 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:12.993000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Sep 6 01:21:13.041866 kernel: audit: type=1327 audit(1757121672.993:165): proctitle=2F7362696E2F617564697463746C002D44 Sep 6 01:21:13.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:13.058871 kernel: audit: type=1131 audit(1757121673.000:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:13.063376 augenrules[1947]: No rules Sep 6 01:21:13.064196 systemd[1]: Finished audit-rules.service. Sep 6 01:21:13.065314 sudo[1925]: pam_unix(sudo:session): session closed for user root Sep 6 01:21:13.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:13.064000 audit[1925]: USER_END pid=1925 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 01:21:13.099727 kernel: audit: type=1130 audit(1757121673.063:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:13.099815 kernel: audit: type=1106 audit(1757121673.064:168): pid=1925 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 01:21:13.064000 audit[1925]: CRED_DISP pid=1925 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 01:21:13.116485 kernel: audit: type=1104 audit(1757121673.064:169): pid=1925 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 6 01:21:13.146638 sshd[1907]: pam_unix(sshd:session): session closed for user core Sep 6 01:21:13.146000 audit[1907]: USER_END pid=1907 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:21:13.170294 systemd[1]: sshd@5-10.200.20.27:22-10.200.16.10:53406.service: Deactivated successfully. Sep 6 01:21:13.146000 audit[1907]: CRED_DISP pid=1907 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:21:13.188362 kernel: audit: type=1106 audit(1757121673.146:170): pid=1907 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:21:13.188456 kernel: audit: type=1104 audit(1757121673.146:171): pid=1907 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:21:13.170987 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 01:21:13.188978 systemd-logind[1571]: Session 8 logged out. Waiting for processes to exit. Sep 6 01:21:13.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.27:22-10.200.16.10:53406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:13.207113 kernel: audit: type=1131 audit(1757121673.169:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.27:22-10.200.16.10:53406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:13.207276 systemd-logind[1571]: Removed session 8. Sep 6 01:21:13.213501 systemd[1]: Started sshd@6-10.200.20.27:22-10.200.16.10:53414.service. Sep 6 01:21:13.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.27:22-10.200.16.10:53414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:21:13.623000 audit[1954]: USER_ACCT pid=1954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:21:13.624826 sshd[1954]: Accepted publickey for core from 10.200.16.10 port 53414 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:21:13.624000 audit[1954]: CRED_ACQ pid=1954 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:21:13.624000 audit[1954]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcffaf950 a2=3 a3=1 items=0 ppid=1 pid=1954 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:13.624000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:21:13.626388 sshd[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:21:13.630025 systemd-logind[1571]: New session 9 of user core. Sep 6 01:21:13.630424 systemd[1]: Started session-9.scope. Sep 6 01:21:13.633000 audit[1954]: USER_START pid=1954 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:21:13.634000 audit[1957]: CRED_ACQ pid=1957 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:21:13.863000 audit[1958]: USER_ACCT pid=1958 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 01:21:13.863000 audit[1958]: CRED_REFR pid=1958 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 01:21:13.864470 sudo[1958]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 01:21:13.864674 sudo[1958]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 01:21:13.864000 audit[1958]: USER_START pid=1958 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 01:21:13.885079 systemd[1]: Starting docker.service... Sep 6 01:21:15.920819 update_engine[1573]: I0906 01:21:15.920425 1573 update_attempter.cc:509] Updating boot flags... 
Sep 6 01:21:16.122176 env[1968]: time="2025-09-06T01:21:16.122125263Z" level=info msg="Starting up" Sep 6 01:21:16.123436 env[1968]: time="2025-09-06T01:21:16.123412294Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 01:21:16.123534 env[1968]: time="2025-09-06T01:21:16.123520893Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 01:21:16.123596 env[1968]: time="2025-09-06T01:21:16.123582813Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 01:21:16.123647 env[1968]: time="2025-09-06T01:21:16.123635693Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 01:21:16.125425 env[1968]: time="2025-09-06T01:21:16.125405920Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 01:21:16.125516 env[1968]: time="2025-09-06T01:21:16.125503519Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 01:21:16.125571 env[1968]: time="2025-09-06T01:21:16.125558759Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 01:21:16.125620 env[1968]: time="2025-09-06T01:21:16.125608399Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 01:21:18.812553 env[1968]: time="2025-09-06T01:21:18.812512967Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 6 01:21:18.812553 env[1968]: time="2025-09-06T01:21:18.812542727Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 6 01:21:18.812941 env[1968]: time="2025-09-06T01:21:18.812670446Z" level=info msg="Loading containers: start." 
Sep 6 01:21:18.912000 audit[2087]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:18.917034 kernel: kauditd_printk_skb: 11 callbacks suppressed Sep 6 01:21:18.917089 kernel: audit: type=1325 audit(1757121678.912:182): table=nat:5 family=2 entries=2 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:18.912000 audit[2087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffed34680 a2=0 a3=1 items=0 ppid=1968 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:18.951037 kernel: audit: type=1300 audit(1757121678.912:182): arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffed34680 a2=0 a3=1 items=0 ppid=1968 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:18.951173 kernel: audit: type=1327 audit(1757121678.912:182): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Sep 6 01:21:18.912000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Sep 6 01:21:18.920000 audit[2089]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:18.975305 kernel: audit: type=1325 audit(1757121678.920:183): table=filter:6 family=2 entries=2 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:18.975359 kernel: audit: type=1300 audit(1757121678.920:183): arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd4434140 a2=0 a3=1 items=0 ppid=1968 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:18.920000 audit[2089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd4434140 a2=0 a3=1 items=0 ppid=1968 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:18.920000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Sep 6 01:21:19.013863 kernel: audit: type=1327 audit(1757121678.920:183): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Sep 6 01:21:19.014001 kernel: audit: type=1325 audit(1757121678.922:184): table=filter:7 family=2 entries=1 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:18.922000 audit[2091]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:18.922000 audit[2091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffffe55f90 a2=0 a3=1 items=0 ppid=1968 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.053824 kernel: 
audit: type=1300 audit(1757121678.922:184): arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffffe55f90 a2=0 a3=1 items=0 ppid=1968 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.053936 kernel: audit: type=1327 audit(1757121678.922:184): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 6 01:21:18.922000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 6 01:21:18.924000 audit[2093]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.082037 kernel: audit: type=1325 audit(1757121678.924:185): table=filter:8 family=2 entries=1 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:18.924000 audit[2093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe295fa50 a2=0 a3=1 items=0 ppid=1968 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:18.924000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 6 01:21:18.925000 audit[2095]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:18.925000 audit[2095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc421e5b0 a2=0 a3=1 items=0 ppid=1968 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:18.925000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Sep 6 01:21:18.927000 audit[2097]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=2097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:18.927000 audit[2097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff51ac020 a2=0 a3=1 items=0 ppid=1968 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:18.927000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Sep 6 01:21:19.092000 audit[2099]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.092000 audit[2099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe62d08d0 a2=0 a3=1 items=0 ppid=1968 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.092000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Sep 6 01:21:19.094000 audit[2101]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.094000 audit[2101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff46850e0 a2=0 a3=1 items=0 ppid=1968 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.094000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Sep 6 01:21:19.096000 audit[2103]: NETFILTER_CFG table=filter:13 family=2 entries=2 op=nft_register_chain pid=2103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.096000 audit[2103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffc1142b10 a2=0 a3=1 items=0 ppid=1968 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.096000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 6 01:21:19.130000 audit[2108]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_unregister_rule pid=2108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.130000 audit[2108]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd50635d0 a2=0 a3=1 items=0 ppid=1968 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.130000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 6 01:21:19.133000 audit[2109]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.133000 audit[2109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe1f8d9b0 a2=0 a3=1 items=0 ppid=1968 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.133000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 6 01:21:19.181290 kernel: Initializing XFRM netlink socket Sep 6 01:21:19.204315 env[1968]: time="2025-09-06T01:21:19.204282632Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Sep 6 01:21:19.274000 audit[2116]: NETFILTER_CFG table=nat:16 family=2 entries=2 op=nft_register_chain pid=2116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.274000 audit[2116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=fffff895a500 a2=0 a3=1 items=0 ppid=1968 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.274000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Sep 6 01:21:19.297000 audit[2119]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_rule pid=2119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.297000 audit[2119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffe1128a40 a2=0 a3=1 items=0 ppid=1968 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.297000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Sep 6 01:21:19.300000 audit[2122]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_rule pid=2122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.300000 audit[2122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffcdcc9550 a2=0 a3=1 items=0 ppid=1968 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.300000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Sep 6 01:21:19.301000 audit[2124]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.301000 audit[2124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffffdba5370 a2=0 a3=1 items=0 ppid=1968 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.301000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Sep 6 01:21:19.303000 audit[2126]: NETFILTER_CFG table=nat:20 family=2 entries=2 op=nft_register_chain pid=2126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.303000 audit[2126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=fffffd9f1520 a2=0 a3=1 items=0 ppid=1968 pid=2126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.303000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Sep 6 01:21:19.304000 audit[2128]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=2128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.304000 audit[2128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffdae67da0 a2=0 a3=1 items=0 ppid=1968 pid=2128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.304000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Sep 6 01:21:19.306000 audit[2130]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.306000 audit[2130]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffdfc54570 a2=0 a3=1 items=0 ppid=1968 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.306000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Sep 6 01:21:19.308000 audit[2132]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.308000 audit[2132]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffd75ef0f0 a2=0 a3=1 items=0 ppid=1968 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.308000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Sep 6 01:21:19.309000 audit[2134]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.309000 audit[2134]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffeac3bfe0 a2=0 a3=1 items=0 ppid=1968 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.309000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 6 01:21:19.311000 audit[2136]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.311000 audit[2136]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffcec61aa0 a2=0 a3=1 items=0 ppid=1968 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 6 01:21:19.311000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 6 01:21:19.312000 audit[2138]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.312000 audit[2138]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff5ec9f00 a2=0 a3=1 items=0 ppid=1968 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.312000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Sep 6 01:21:19.314496 systemd-networkd[1753]: docker0: Link UP Sep 6 01:21:19.342000 audit[2142]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_unregister_rule pid=2142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.342000 audit[2142]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdcfa7a30 a2=0 a3=1 items=0 ppid=1968 pid=2142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.342000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 6 01:21:19.347000 audit[2143]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=2143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:21:19.347000 audit[2143]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffca85ff0 a2=0 a3=1 items=0 ppid=1968 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:21:19.347000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 6 01:21:19.349566 env[1968]: time="2025-09-06T01:21:19.349532953Z" level=info msg="Loading containers: done." Sep 6 01:21:19.359568 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2191334447-merged.mount: Deactivated successfully. Sep 6 01:21:19.420335 env[1968]: time="2025-09-06T01:21:19.420281985Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 01:21:19.420665 env[1968]: time="2025-09-06T01:21:19.420649063Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 01:21:19.420824 env[1968]: time="2025-09-06T01:21:19.420810022Z" level=info msg="Daemon has completed initialization" Sep 6 01:21:19.465375 systemd[1]: Started docker.service. Sep 6 01:21:19.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:21:19.472820 env[1968]: time="2025-09-06T01:21:19.472754762Z" level=info msg="API listen on /run/docker.sock" Sep 6 01:21:22.968387 env[1586]: time="2025-09-06T01:21:22.968096060Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 01:21:23.152110 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 6 01:21:23.152316 systemd[1]: Stopped kubelet.service. Sep 6 01:21:23.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:23.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:23.153772 systemd[1]: Starting kubelet.service... Sep 6 01:21:23.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:23.314426 systemd[1]: Started kubelet.service. Sep 6 01:21:23.369706 kubelet[2183]: E0906 01:21:23.369668 2183 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:21:23.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 01:21:23.371444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:21:23.371577 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:21:24.372534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3556454026.mount: Deactivated successfully. 
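The audit PROCTITLE records above carry the full command line of the audited process as a hex string with NUL-separated arguments. A minimal Python sketch for making those fields readable (the helper name and the standalone-script framing are illustrative, not part of auditd or any tool shown in this log):

```python
def decode_proctitle(hex_value: str) -> str:
    """Turn an audit PROCTITLE hex payload into a space-joined command line."""
    raw = bytes.fromhex(hex_value)
    # Arguments are separated by NUL bytes; drop any empty trailing fields.
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

# The first PROCTITLE record in this section decodes to the Docker NAT rule:
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47"
    "002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552"
))
# -> /usr/sbin/iptables --wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
```

Decoded this way, the NETFILTER_CFG bursts above are simply the Docker daemon installing its DOCKER, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER chains via iptables before docker.service reports started.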
Sep 6 01:21:26.409152 env[1586]: time="2025-09-06T01:21:26.409107272Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:26.426683 env[1586]: time="2025-09-06T01:21:26.426641768Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:26.435341 env[1586]: time="2025-09-06T01:21:26.435306816Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:26.445105 env[1586]: time="2025-09-06T01:21:26.445055260Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:26.445984 env[1586]: time="2025-09-06T01:21:26.445953417Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 6 01:21:26.447886 env[1586]: time="2025-09-06T01:21:26.447864730Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 6 01:21:28.675356 env[1586]: time="2025-09-06T01:21:28.675301945Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:28.714024 env[1586]: time="2025-09-06T01:21:28.713978593Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:28.764126 env[1586]: time="2025-09-06T01:21:28.764084969Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:28.804153 env[1586]: time="2025-09-06T01:21:28.804098414Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:28.804962 env[1586]: time="2025-09-06T01:21:28.804932932Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 6 01:21:28.805622 env[1586]: time="2025-09-06T01:21:28.805599890Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 6 01:21:31.650662 env[1586]: time="2025-09-06T01:21:31.650594542Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:31.700907 env[1586]: time="2025-09-06T01:21:31.700844413Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:31.745501 env[1586]: 
time="2025-09-06T01:21:31.745453467Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:31.754715 env[1586]: time="2025-09-06T01:21:31.754681015Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:31.755484 env[1586]: time="2025-09-06T01:21:31.755458577Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 6 01:21:31.755987 env[1586]: time="2025-09-06T01:21:31.755964779Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 6 01:21:33.402115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 6 01:21:33.402300 systemd[1]: Stopped kubelet.service. Sep 6 01:21:33.427893 kernel: kauditd_printk_skb: 67 callbacks suppressed Sep 6 01:21:33.427999 kernel: audit: type=1130 audit(1757121693.401:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:33.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:33.403812 systemd[1]: Starting kubelet.service... Sep 6 01:21:33.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:33.448009 kernel: audit: type=1131 audit(1757121693.401:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:33.496560 systemd[1]: Started kubelet.service. Sep 6 01:21:33.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:33.515336 kernel: audit: type=1130 audit(1757121693.495:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:33.615724 kubelet[2198]: E0906 01:21:33.615673 2198 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:21:33.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 01:21:33.617390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:21:33.617533 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 01:21:33.635275 kernel: audit: type=1131 audit(1757121693.616:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 01:21:38.317561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3013030017.mount: Deactivated successfully. Sep 6 01:21:39.141282 env[1586]: time="2025-09-06T01:21:39.141220898Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:39.195995 env[1586]: time="2025-09-06T01:21:39.195940670Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:39.254563 env[1586]: time="2025-09-06T01:21:39.254489651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:39.295865 env[1586]: time="2025-09-06T01:21:39.295815390Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:39.296482 env[1586]: time="2025-09-06T01:21:39.296454592Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 6 01:21:39.297069 env[1586]: time="2025-09-06T01:21:39.297044353Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 01:21:40.946169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount792069946.mount: Deactivated successfully. Sep 6 01:21:43.652187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Sep 6 01:21:43.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:43.652374 systemd[1]: Stopped kubelet.service. Sep 6 01:21:43.653810 systemd[1]: Starting kubelet.service... Sep 6 01:21:43.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:43.687553 kernel: audit: type=1130 audit(1757121703.651:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:43.687660 kernel: audit: type=1131 audit(1757121703.651:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:47.807731 systemd[1]: Started kubelet.service. Sep 6 01:21:47.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:21:47.830284 kernel: audit: type=1130 audit(1757121707.806:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:47.854320 kubelet[2213]: E0906 01:21:47.854282 2213 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:21:47.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 01:21:47.856204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:21:47.856355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:21:47.875272 kernel: audit: type=1131 audit(1757121707.855:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 01:21:50.786775 env[1586]: time="2025-09-06T01:21:50.786554938Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:50.795653 env[1586]: time="2025-09-06T01:21:50.795601234Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:50.801611 env[1586]: time="2025-09-06T01:21:50.801580925Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:50.809025 env[1586]: time="2025-09-06T01:21:50.808995858Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:50.809961 env[1586]: time="2025-09-06T01:21:50.809926340Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 6 01:21:50.811404 env[1586]: time="2025-09-06T01:21:50.811373543Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 01:21:51.690782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2937882882.mount: Deactivated successfully. 
Sep 6 01:21:51.731795 env[1586]: time="2025-09-06T01:21:51.731740889Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:51.744591 env[1586]: time="2025-09-06T01:21:51.744549271Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:51.752169 env[1586]: time="2025-09-06T01:21:51.752117964Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:51.760970 env[1586]: time="2025-09-06T01:21:51.760935300Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:51.761525 env[1586]: time="2025-09-06T01:21:51.761498301Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 6 01:21:51.762531 env[1586]: time="2025-09-06T01:21:51.762501623Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 6 01:21:52.465952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133069742.mount: Deactivated successfully. Sep 6 01:21:54.865107 env[1586]: time="2025-09-06T01:21:54.865047274Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:54.880259 env[1586]: time="2025-09-06T01:21:54.880190499Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:54.886661 env[1586]: time="2025-09-06T01:21:54.886627030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:54.895509 env[1586]: time="2025-09-06T01:21:54.895470084Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:54.896436 env[1586]: time="2025-09-06T01:21:54.896405086Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 6 01:21:57.902270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Sep 6 01:21:57.902441 systemd[1]: Stopped kubelet.service. Sep 6 01:21:57.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:57.903915 systemd[1]: Starting kubelet.service... Sep 6 01:21:57.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:21:57.952965 kernel: audit: type=1130 audit(1757121717.901:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:57.953095 kernel: audit: type=1131 audit(1757121717.901:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:58.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:58.224217 systemd[1]: Started kubelet.service. Sep 6 01:21:58.249280 kernel: audit: type=1130 audit(1757121718.223:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:21:58.284787 kubelet[2244]: E0906 01:21:58.284738 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:21:58.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 01:21:58.286457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:21:58.286601 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:21:58.309263 kernel: audit: type=1131 audit(1757121718.285:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 01:22:00.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:00.481606 systemd[1]: Stopped kubelet.service. Sep 6 01:22:00.483737 systemd[1]: Starting kubelet.service... Sep 6 01:22:00.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:00.516805 kernel: audit: type=1130 audit(1757121720.480:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:00.517363 kernel: audit: type=1131 audit(1757121720.480:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:00.532074 systemd[1]: Reloading. 
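The repeated kubelet failures above (restart counters 5 through 8) all carry the same cause: /var/lib/kubelet/config.yaml does not exist yet, which is expected on a node whose configuration has not been written (typically by kubeadm during init/join); systemd keeps rescheduling restarts until the file appears, after which kubelet starts cleanly as the later entries in this section show. A minimal Python sketch, under the same hypothetical `journal.txt` export as above, that tallies the crash loop rather than eyeballing it:

```python
import re
from collections import Counter

RESTART_RE = re.compile(r"restart counter is at (\d+)")
CONFIG_ERR = "failed to load Kubelet config file /var/lib/kubelet/config.yaml"

def summarize_kubelet_crashloop(lines):
    """Count missing-config failures and report the highest restart counter seen."""
    stats = Counter()
    last_counter = None
    for line in lines:
        if CONFIG_ERR in line:
            stats["config_missing"] += 1
        match = RESTART_RE.search(line)
        if match:
            last_counter = int(match.group(1))
    return stats["config_missing"], last_counter

# Hypothetical usage:
# failures, counter = summarize_kubelet_crashloop(open("journal.txt", encoding="utf-8"))
# In this section that yields several config_missing hits with the counter reaching 8.
```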
Sep 6 01:22:00.591810 /usr/lib/systemd/system-generators/torcx-generator[2281]: time="2025-09-06T01:22:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:22:00.591841 /usr/lib/systemd/system-generators/torcx-generator[2281]: time="2025-09-06T01:22:00Z" level=info msg="torcx already run" Sep 6 01:22:00.686616 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:22:00.686637 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:22:00.702346 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:22:00.798738 systemd[1]: Started kubelet.service. Sep 6 01:22:00.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:00.817313 kernel: audit: type=1130 audit(1757121720.797:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:00.818180 systemd[1]: Stopping kubelet.service... Sep 6 01:22:00.819465 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 01:22:00.819716 systemd[1]: Stopped kubelet.service. Sep 6 01:22:00.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:00.821800 systemd[1]: Starting kubelet.service... Sep 6 01:22:00.839254 kernel: audit: type=1131 audit(1757121720.818:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:01.014114 systemd[1]: Started kubelet.service. Sep 6 01:22:01.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:01.035311 kernel: audit: type=1130 audit(1757121721.013:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:01.179496 kubelet[2364]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:22:01.179865 kubelet[2364]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 6 01:22:01.179916 kubelet[2364]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:22:01.180084 kubelet[2364]: I0906 01:22:01.180054 2364 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 01:22:02.422709 kubelet[2364]: I0906 01:22:02.422671 2364 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 01:22:02.423065 kubelet[2364]: I0906 01:22:02.423052 2364 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 01:22:02.423401 kubelet[2364]: I0906 01:22:02.423383 2364 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 01:22:02.441398 kubelet[2364]: E0906 01:22:02.441360 2364 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:22:02.442576 kubelet[2364]: I0906 01:22:02.442549 2364 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 01:22:02.449891 kubelet[2364]: E0906 01:22:02.449815 2364 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 01:22:02.450002 kubelet[2364]: I0906 01:22:02.449988 2364 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 01:22:02.454293 kubelet[2364]: I0906 01:22:02.454273 2364 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 01:22:02.455327 kubelet[2364]: I0906 01:22:02.455307 2364 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 01:22:02.455576 kubelet[2364]: I0906 01:22:02.455550 2364 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 01:22:02.455812 kubelet[2364]: I0906 01:22:02.455643 2364 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-34c19deec5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 01:22:02.455948 kubelet[2364]: I0906 01:22:02.455936 2364 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 01:22:02.456005 kubelet[2364]: I0906 01:22:02.455996 2364 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 01:22:02.456169 kubelet[2364]: I0906 01:22:02.456154 2364 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:22:02.458820 kubelet[2364]: I0906 01:22:02.458798 2364 kubelet.go:408] "Attempting to sync node with API server" Sep 6 01:22:02.458933 kubelet[2364]: I0906 01:22:02.458921 2364 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 01:22:02.459006 kubelet[2364]: I0906 01:22:02.458996 2364 kubelet.go:314] "Adding apiserver pod source" Sep 6 01:22:02.459065 kubelet[2364]: I0906 01:22:02.459056 2364 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 01:22:02.468735 kubelet[2364]: W0906 01:22:02.468628 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-34c19deec5&limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Sep 6 01:22:02.468735 kubelet[2364]: E0906 01:22:02.468699 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.20.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-34c19deec5&limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:22:02.469075 kubelet[2364]: W0906 01:22:02.469031 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Sep 6 01:22:02.469136 kubelet[2364]: E0906 01:22:02.469076 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:22:02.469167 kubelet[2364]: I0906 01:22:02.469150 2364 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 01:22:02.469611 kubelet[2364]: I0906 01:22:02.469589 2364 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 01:22:02.469667 kubelet[2364]: W0906 01:22:02.469635 2364 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 01:22:02.470343 kubelet[2364]: I0906 01:22:02.470324 2364 server.go:1274] "Started kubelet" Sep 6 01:22:02.476000 audit[2364]: AVC avc: denied { mac_admin } for pid=2364 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:02.479321 kubelet[2364]: I0906 01:22:02.479299 2364 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 6 01:22:02.479443 kubelet[2364]: I0906 01:22:02.479428 2364 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 6 01:22:02.479599 kubelet[2364]: I0906 01:22:02.479586 2364 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 01:22:02.487776 kubelet[2364]: I0906 01:22:02.487741 2364 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 01:22:02.488744 kubelet[2364]: I0906 01:22:02.488726 2364 server.go:449] "Adding debug handlers to kubelet server" Sep 6 01:22:02.489814 kubelet[2364]: I0906 01:22:02.489763 2364 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 01:22:02.490109 kubelet[2364]: I0906 01:22:02.490094 2364 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 01:22:02.490432 kubelet[2364]: I0906 01:22:02.490412 2364 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 01:22:02.491869 kubelet[2364]: I0906 01:22:02.491848 2364 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 01:22:02.492186 kubelet[2364]: E0906 01:22:02.492164 2364 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"ci-3510.3.8-n-34c19deec5\" not found" Sep 6 01:22:02.494641 kubelet[2364]: I0906 01:22:02.494621 2364 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 01:22:02.494786 kubelet[2364]: I0906 01:22:02.494776 2364 reconciler.go:26] "Reconciler: start to sync state" Sep 6 01:22:02.476000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:22:02.476000 audit[2364]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000856b70 a1=4000a6a8e8 a2=4000856b40 a3=25 items=0 ppid=1 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:02.476000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:22:02.477000 audit[2364]: AVC avc: denied { mac_admin } for pid=2364 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:02.477000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:22:02.477000 audit[2364]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40008d3240 a1=4000a6a900 a2=4000856c00 a3=25 items=0 ppid=1 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:02.477000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:22:02.482000 audit[2375]: NETFILTER_CFG table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:02.482000 audit[2375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe679daa0 a2=0 a3=1 items=0 ppid=2364 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:02.482000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 6 01:22:02.482000 audit[2376]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:02.482000 audit[2376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd81e2890 a2=0 a3=1 items=0 ppid=2364 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:02.496295 kernel: audit: type=1400 audit(1757121722.476:228): avc: denied { mac_admin } for pid=2364 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:02.482000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 6 01:22:02.492000 audit[2378]: NETFILTER_CFG table=filter:31 family=2 entries=2 op=nft_register_chain pid=2378 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:02.492000 audit[2378]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc4e938a0 a2=0 a3=1 items=0 ppid=2364 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:02.492000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 01:22:02.492000 audit[2380]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2380 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:02.492000 audit[2380]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffee2833b0 a2=0 a3=1 items=0 ppid=2364 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:02.492000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 01:22:02.497188 kubelet[2364]: E0906 01:22:02.478419 2364 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.27:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.27:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-34c19deec5.18628ce59ef70865 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-34c19deec5,UID:ci-3510.3.8-n-34c19deec5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-34c19deec5,},FirstTimestamp:2025-09-06 01:22:02.470303845 +0000 UTC m=+1.435474960,LastTimestamp:2025-09-06 01:22:02.470303845 +0000 UTC m=+1.435474960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-34c19deec5,}" Sep 6 01:22:02.498560 kubelet[2364]: E0906 01:22:02.498530 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-34c19deec5?timeout=10s\": dial tcp 10.200.20.27:6443: connect: connection refused" interval="200ms" Sep 6 01:22:02.499143 kubelet[2364]: I0906 01:22:02.499126 2364 factory.go:221] Registration of the systemd container factory successfully Sep 6 01:22:02.499484 kubelet[2364]: I0906 01:22:02.499466 2364 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 01:22:02.502210 kubelet[2364]: I0906 01:22:02.502193 2364 factory.go:221] Registration of the containerd container factory successfully Sep 6 01:22:02.506213 kubelet[2364]: W0906 01:22:02.506174 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.200.20.27:6443: connect: connection refused Sep 6 01:22:02.506459 kubelet[2364]: E0906 01:22:02.506438 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:22:02.544000 audit[2386]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2386 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:02.544000 audit[2386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffee1506b0 a2=0 a3=1 items=0 ppid=2364 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:02.544000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Sep 6 01:22:02.546460 kubelet[2364]: I0906 01:22:02.546423 2364 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 01:22:02.546000 audit[2388]: NETFILTER_CFG table=mangle:34 family=10 entries=2 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:02.546000 audit[2388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffcf6404e0 a2=0 a3=1 items=0 ppid=2364 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:02.546000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 6 01:22:02.546000 audit[2389]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:02.546000 audit[2389]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffff431e90 a2=0 a3=1 items=0 ppid=2364 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:02.546000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 6 01:22:02.548113 kubelet[2364]: I0906 01:22:02.548093 2364 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 01:22:02.548205 kubelet[2364]: I0906 01:22:02.548194 2364 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 01:22:02.548299 kubelet[2364]: I0906 01:22:02.548289 2364 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 01:22:02.547000 audit[2392]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=2392 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:02.547000 audit[2392]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe625b0b0 a2=0 a3=1 items=0 ppid=2364 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:02.547000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 6 01:22:02.549535 kubelet[2364]: E0906 01:22:02.549510 2364 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 01:22:02.550012 kubelet[2364]: W0906 01:22:02.549989 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Sep 6 01:22:02.550133 kubelet[2364]: E0906 01:22:02.550114 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:22:02.549000 audit[2393]: NETFILTER_CFG table=mangle:37 family=10 entries=1 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:02.549000 audit[2393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc08b1250 a2=0 a3=1 items=0 ppid=2364 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:02.549000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 6 01:22:02.549000 audit[2394]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:02.549000 audit[2394]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcda109a0 a2=0 a3=1 items=0 ppid=2364 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:02.549000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 6 01:22:02.550000 audit[2395]: NETFILTER_CFG table=nat:39 family=10 entries=2 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:02.550000 audit[2395]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffc673cc10 a2=0 a3=1 items=0 ppid=2364 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:02.550000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 6 01:22:02.551000 audit[2396]: NETFILTER_CFG table=filter:40 family=10 entries=2 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:02.551000 audit[2396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffcdc7fc90 a2=0 a3=1 items=0 ppid=2364 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:02.551000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 6 01:22:02.592719 kubelet[2364]: E0906 01:22:02.592685 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-34c19deec5\" not found" Sep 6 01:22:02.627834 kubelet[2364]: I0906 01:22:02.627787 2364 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 01:22:02.627966 kubelet[2364]: I0906 01:22:02.627952 2364 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 01:22:02.628023 kubelet[2364]: I0906 01:22:02.628015 2364 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:22:02.640167 kubelet[2364]: I0906 01:22:02.640147 2364 policy_none.go:49] "None policy: Start" Sep 6 01:22:02.640929 kubelet[2364]: I0906 01:22:02.640913 2364 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 01:22:02.641029 kubelet[2364]: I0906 01:22:02.641019 2364 state_mem.go:35] "Initializing new in-memory state store" Sep 6 01:22:02.652388 kubelet[2364]: E0906 01:22:02.652361 2364 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 01:22:02.653774 kubelet[2364]: I0906 01:22:02.653742 2364 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 01:22:02.652000 audit[2364]: AVC avc: denied { mac_admin } for pid=2364 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:02.652000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:22:02.652000 audit[2364]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000923560 a1=4000975f50 a2=4000923530 a3=25 items=0 ppid=1 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:02.652000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:22:02.654002 kubelet[2364]: I0906 01:22:02.653820 2364 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 6 01:22:02.654002 kubelet[2364]: I0906 01:22:02.653934 2364 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 01:22:02.654002 kubelet[2364]: I0906 01:22:02.653944 2364 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 01:22:02.655334 kubelet[2364]: I0906 01:22:02.655310 2364 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 01:22:02.656392 kubelet[2364]: E0906 01:22:02.656373 2364 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-34c19deec5\" not found" Sep 6 01:22:02.701922 kubelet[2364]: E0906 01:22:02.699733 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-34c19deec5?timeout=10s\": dial tcp 10.200.20.27:6443: connect: connection refused" interval="400ms" Sep 6 01:22:02.756415 kubelet[2364]: I0906 01:22:02.756368 2364 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:02.756881 kubelet[2364]: E0906 01:22:02.756858 2364 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.27:6443/api/v1/nodes\": dial tcp 10.200.20.27:6443: connect: connection refused" node="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:02.896384 kubelet[2364]: I0906 01:22:02.896350 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c29fedbff69de6afbfcd353a058263d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-34c19deec5\" (UID: \"3c29fedbff69de6afbfcd353a058263d\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:02.896585 kubelet[2364]: I0906 01:22:02.896566 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1bb0f18dccac4472bb4caf530ad6f41d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-34c19deec5\" (UID: \"1bb0f18dccac4472bb4caf530ad6f41d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:02.896674 kubelet[2364]: I0906 01:22:02.896659 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1bb0f18dccac4472bb4caf530ad6f41d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-34c19deec5\" (UID: \"1bb0f18dccac4472bb4caf530ad6f41d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:02.896759 kubelet[2364]: I0906 01:22:02.896744 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4834508a2b49ebd6ebe17c22e93e874d-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-34c19deec5\" (UID: \"4834508a2b49ebd6ebe17c22e93e874d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:02.896844 kubelet[2364]: I0906 01:22:02.896830 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4834508a2b49ebd6ebe17c22e93e874d-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-3510.3.8-n-34c19deec5\" (UID: \"4834508a2b49ebd6ebe17c22e93e874d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:02.896937 kubelet[2364]: I0906 01:22:02.896924 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1bb0f18dccac4472bb4caf530ad6f41d-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-34c19deec5\" (UID: \"1bb0f18dccac4472bb4caf530ad6f41d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:02.897021 kubelet[2364]: I0906 01:22:02.897008 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4834508a2b49ebd6ebe17c22e93e874d-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-34c19deec5\" (UID: \"4834508a2b49ebd6ebe17c22e93e874d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:02.897100 kubelet[2364]: I0906 01:22:02.897088 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4834508a2b49ebd6ebe17c22e93e874d-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-34c19deec5\" (UID: \"4834508a2b49ebd6ebe17c22e93e874d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:02.897184 kubelet[2364]: I0906 01:22:02.897172 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4834508a2b49ebd6ebe17c22e93e874d-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-34c19deec5\" (UID: \"4834508a2b49ebd6ebe17c22e93e874d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:02.959333 kubelet[2364]: I0906 01:22:02.958945 2364 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:02.959562 kubelet[2364]: E0906 01:22:02.959517 2364 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.27:6443/api/v1/nodes\": dial tcp 10.200.20.27:6443: connect: connection refused" node="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:03.100941 kubelet[2364]: E0906 01:22:03.100886 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-34c19deec5?timeout=10s\": dial tcp 10.200.20.27:6443: connect: connection refused" interval="800ms" Sep 6 01:22:03.160165 env[1586]: time="2025-09-06T01:22:03.159897286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-34c19deec5,Uid:1bb0f18dccac4472bb4caf530ad6f41d,Namespace:kube-system,Attempt:0,}" Sep 6 01:22:03.164218 env[1586]: time="2025-09-06T01:22:03.164182292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-34c19deec5,Uid:4834508a2b49ebd6ebe17c22e93e874d,Namespace:kube-system,Attempt:0,}" Sep 6 01:22:03.164799 env[1586]: time="2025-09-06T01:22:03.164767333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-34c19deec5,Uid:3c29fedbff69de6afbfcd353a058263d,Namespace:kube-system,Attempt:0,}" Sep 6 01:22:03.283037 kubelet[2364]: W0906 01:22:03.282719 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.200.20.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Sep 6 01:22:03.283037 kubelet[2364]: E0906 01:22:03.282792 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:22:03.362069 kubelet[2364]: I0906 01:22:03.361714 2364 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:03.362069 kubelet[2364]: E0906 01:22:03.362037 2364 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.27:6443/api/v1/nodes\": dial tcp 10.200.20.27:6443: connect: connection refused" node="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:03.664615 kubelet[2364]: W0906 01:22:03.664555 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-34c19deec5&limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Sep 6 01:22:03.664989 kubelet[2364]: E0906 01:22:03.664624 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-34c19deec5&limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:22:03.791773 kubelet[2364]: W0906 01:22:03.791713 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Sep 6 01:22:03.791917 kubelet[2364]: E0906 01:22:03.791778 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:22:03.843917 kubelet[2364]: W0906 01:22:03.843834 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Sep 6 01:22:03.843917 kubelet[2364]: E0906 01:22:03.843882 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:22:03.901702 kubelet[2364]: E0906 01:22:03.901652 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-34c19deec5?timeout=10s\": dial tcp 10.200.20.27:6443: connect: connection refused" interval="1.6s" Sep 6 01:22:03.925902 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4208470689.mount: Deactivated successfully. Sep 6 01:22:03.980059 env[1586]: time="2025-09-06T01:22:03.980004043Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:04.035936 env[1586]: time="2025-09-06T01:22:04.035867755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:04.045067 env[1586]: time="2025-09-06T01:22:04.045029447Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:04.068948 env[1586]: time="2025-09-06T01:22:04.068888157Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:04.088097 env[1586]: time="2025-09-06T01:22:04.088067342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:04.101110 env[1586]: time="2025-09-06T01:22:04.101071839Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:04.113368 env[1586]: time="2025-09-06T01:22:04.113328934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:04.129533 env[1586]: time="2025-09-06T01:22:04.129498315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:04.146401 env[1586]: time="2025-09-06T01:22:04.146362697Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:04.164200 kubelet[2364]: I0906 01:22:04.164140 2364 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:04.164525 kubelet[2364]: E0906 01:22:04.164492 2364 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.27:6443/api/v1/nodes\": dial tcp 10.200.20.27:6443: connect: connection refused" node="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:04.171445 env[1586]: time="2025-09-06T01:22:04.171416089Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:04.175936 env[1586]: time="2025-09-06T01:22:04.175894175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:04.191588 env[1586]: time="2025-09-06T01:22:04.191484595Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:04.247450 env[1586]: time="2025-09-06T01:22:04.247371906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:22:04.247450 env[1586]: time="2025-09-06T01:22:04.247423626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:22:04.247666 env[1586]: time="2025-09-06T01:22:04.247434346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:22:04.247666 env[1586]: time="2025-09-06T01:22:04.247587027Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9132abe6a9f84ba081d00b9aa8f1898b574125dd90b2d2aaf17b99c238c94f55 pid=2405 runtime=io.containerd.runc.v2 Sep 6 01:22:04.291511 env[1586]: time="2025-09-06T01:22:04.291463403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-34c19deec5,Uid:4834508a2b49ebd6ebe17c22e93e874d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9132abe6a9f84ba081d00b9aa8f1898b574125dd90b2d2aaf17b99c238c94f55\"" Sep 6 01:22:04.294164 env[1586]: time="2025-09-06T01:22:04.294128926Z" level=info msg="CreateContainer within sandbox \"9132abe6a9f84ba081d00b9aa8f1898b574125dd90b2d2aaf17b99c238c94f55\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 01:22:04.304167 env[1586]: time="2025-09-06T01:22:04.304096579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:22:04.304305 env[1586]: time="2025-09-06T01:22:04.304146619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:22:04.304305 env[1586]: time="2025-09-06T01:22:04.304157539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:22:04.304403 env[1586]: time="2025-09-06T01:22:04.304341299Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eecb1f7dd576c60dd14ef090819781f3633947b92204dd89754298c652fcc2a0 pid=2446 runtime=io.containerd.runc.v2 Sep 6 01:22:04.331599 env[1586]: time="2025-09-06T01:22:04.330879813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:22:04.331599 env[1586]: time="2025-09-06T01:22:04.330966294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:22:04.331599 env[1586]: time="2025-09-06T01:22:04.330993854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:22:04.331599 env[1586]: time="2025-09-06T01:22:04.331125534Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/601773794a02f341d6346111378f36fc0946366a08a669b41ba8f357e896270a pid=2482 runtime=io.containerd.runc.v2 Sep 6 01:22:04.342610 env[1586]: time="2025-09-06T01:22:04.342547508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-34c19deec5,Uid:3c29fedbff69de6afbfcd353a058263d,Namespace:kube-system,Attempt:0,} returns sandbox id \"eecb1f7dd576c60dd14ef090819781f3633947b92204dd89754298c652fcc2a0\"" Sep 6 01:22:04.344789 env[1586]: time="2025-09-06T01:22:04.344757391Z" level=info msg="CreateContainer within sandbox \"eecb1f7dd576c60dd14ef090819781f3633947b92204dd89754298c652fcc2a0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 01:22:04.379173 env[1586]: time="2025-09-06T01:22:04.379129275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-34c19deec5,Uid:1bb0f18dccac4472bb4caf530ad6f41d,Namespace:kube-system,Attempt:0,} returns sandbox id \"601773794a02f341d6346111378f36fc0946366a08a669b41ba8f357e896270a\"" Sep 6 01:22:04.381208 env[1586]: time="2025-09-06T01:22:04.381181158Z" level=info msg="CreateContainer within sandbox \"601773794a02f341d6346111378f36fc0946366a08a669b41ba8f357e896270a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 01:22:04.414905 env[1586]: time="2025-09-06T01:22:04.414838401Z" level=info msg="CreateContainer within sandbox \"9132abe6a9f84ba081d00b9aa8f1898b574125dd90b2d2aaf17b99c238c94f55\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"53eca53e23d17a0f47e641f0bbca473d2b398bdd8d0862a4fc79e3fd9752a83f\"" Sep 6 01:22:04.415630 env[1586]: time="2025-09-06T01:22:04.415603442Z" level=info msg="StartContainer for \"53eca53e23d17a0f47e641f0bbca473d2b398bdd8d0862a4fc79e3fd9752a83f\"" Sep 6 01:22:04.463289 kubelet[2364]: E0906 01:22:04.463177 2364 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:22:04.484389 env[1586]: time="2025-09-06T01:22:04.484340130Z" level=info msg="StartContainer for \"53eca53e23d17a0f47e641f0bbca473d2b398bdd8d0862a4fc79e3fd9752a83f\" returns successfully" Sep 6 01:22:04.503366 env[1586]: time="2025-09-06T01:22:04.503312435Z" level=info msg="CreateContainer within sandbox \"eecb1f7dd576c60dd14ef090819781f3633947b92204dd89754298c652fcc2a0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36e3f9efd99481c69c8d2245711f2f67cd896ed479a7c0739005cceae8df0437\"" Sep 6 01:22:04.503799 env[1586]: time="2025-09-06T01:22:04.503772555Z" level=info msg="StartContainer for \"36e3f9efd99481c69c8d2245711f2f67cd896ed479a7c0739005cceae8df0437\"" Sep 6 01:22:04.525278 env[1586]: time="2025-09-06T01:22:04.525001982Z" level=info msg="CreateContainer within sandbox \"601773794a02f341d6346111378f36fc0946366a08a669b41ba8f357e896270a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7003e08fee44c822932c2039ac672af70bd2064dd758cda47ceb756115427562\"" Sep 6 01:22:04.525618 env[1586]: 
time="2025-09-06T01:22:04.525590583Z" level=info msg="StartContainer for \"7003e08fee44c822932c2039ac672af70bd2064dd758cda47ceb756115427562\"" Sep 6 01:22:04.588037 env[1586]: time="2025-09-06T01:22:04.587983223Z" level=info msg="StartContainer for \"7003e08fee44c822932c2039ac672af70bd2064dd758cda47ceb756115427562\" returns successfully" Sep 6 01:22:04.637699 env[1586]: time="2025-09-06T01:22:04.637655207Z" level=info msg="StartContainer for \"36e3f9efd99481c69c8d2245711f2f67cd896ed479a7c0739005cceae8df0437\" returns successfully" Sep 6 01:22:05.766724 kubelet[2364]: I0906 01:22:05.766681 2364 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:06.875585 kubelet[2364]: E0906 01:22:06.875554 2364 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-34c19deec5\" not found" node="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:06.999496 kubelet[2364]: I0906 01:22:06.999450 2364 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:06.999496 kubelet[2364]: E0906 01:22:06.999493 2364 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-34c19deec5\": node \"ci-3510.3.8-n-34c19deec5\" not found" Sep 6 01:22:07.475627 kubelet[2364]: I0906 01:22:07.475591 2364 apiserver.go:52] "Watching apiserver" Sep 6 01:22:07.495866 kubelet[2364]: I0906 01:22:07.495836 2364 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 01:22:07.598979 kubelet[2364]: E0906 01:22:07.598938 2364 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-34c19deec5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:08.194780 kubelet[2364]: W0906 01:22:08.194744 2364 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:22:09.593403 systemd[1]: Reloading. Sep 6 01:22:09.665917 /usr/lib/systemd/system-generators/torcx-generator[2657]: time="2025-09-06T01:22:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:22:09.665946 /usr/lib/systemd/system-generators/torcx-generator[2657]: time="2025-09-06T01:22:09Z" level=info msg="torcx already run" Sep 6 01:22:09.722656 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:22:09.722827 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:22:09.738499 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:22:09.841050 systemd[1]: Stopping kubelet.service... Sep 6 01:22:09.866857 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 01:22:09.867143 systemd[1]: Stopped kubelet.service. 
Sep 6 01:22:09.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:09.868909 systemd[1]: Starting kubelet.service... Sep 6 01:22:09.872100 kernel: kauditd_printk_skb: 47 callbacks suppressed Sep 6 01:22:09.872151 kernel: audit: type=1131 audit(1757121729.865:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:10.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:10.101183 systemd[1]: Started kubelet.service. Sep 6 01:22:10.119293 kernel: audit: type=1130 audit(1757121730.100:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:10.156902 kubelet[2729]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:22:10.156902 kubelet[2729]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 01:22:10.156902 kubelet[2729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:22:10.156902 kubelet[2729]: I0906 01:22:10.156582 2729 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 01:22:10.162549 kubelet[2729]: I0906 01:22:10.162519 2729 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 01:22:10.162549 kubelet[2729]: I0906 01:22:10.162546 2729 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 01:22:10.162748 kubelet[2729]: I0906 01:22:10.162729 2729 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 01:22:10.164028 kubelet[2729]: I0906 01:22:10.163991 2729 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 6 01:22:10.166948 kubelet[2729]: I0906 01:22:10.166926 2729 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 01:22:10.174072 kubelet[2729]: E0906 01:22:10.174036 2729 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 01:22:10.174072 kubelet[2729]: I0906 01:22:10.174066 2729 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 01:22:10.178486 kubelet[2729]: I0906 01:22:10.178113 2729 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 01:22:10.178856 kubelet[2729]: I0906 01:22:10.178837 2729 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 01:22:10.178970 kubelet[2729]: I0906 01:22:10.178938 2729 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 01:22:10.179130 kubelet[2729]: I0906 01:22:10.178968 2729 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-34c19deec5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 01:22:10.179205 kubelet[2729]: I0906 01:22:10.179133 2729 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 01:22:10.179205 kubelet[2729]: I0906 01:22:10.179143 2729 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 01:22:10.179205 kubelet[2729]: I0906 01:22:10.179176 2729 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:22:10.179297 kubelet[2729]: I0906 01:22:10.179291 2729 kubelet.go:408] "Attempting to sync node with API server" Sep 6 01:22:10.179328 kubelet[2729]: I0906 01:22:10.179308 2729 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 01:22:10.179353 kubelet[2729]: I0906 01:22:10.179331 2729 kubelet.go:314] "Adding apiserver pod source" Sep 6 01:22:10.179353 kubelet[2729]: I0906 01:22:10.179343 2729 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 01:22:10.193000 audit[2729]: AVC avc: denied { mac_admin } for pid=2729 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:10.200417 kubelet[2729]: I0906 01:22:10.184687 2729 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 01:22:10.200417 kubelet[2729]: I0906 01:22:10.185916 2729 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 01:22:10.200417 kubelet[2729]: I0906 
01:22:10.187803 2729 server.go:1274] "Started kubelet" Sep 6 01:22:10.200417 kubelet[2729]: I0906 01:22:10.187998 2729 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 01:22:10.200417 kubelet[2729]: I0906 01:22:10.188208 2729 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 01:22:10.200417 kubelet[2729]: I0906 01:22:10.188868 2729 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 01:22:10.200417 kubelet[2729]: I0906 01:22:10.190136 2729 server.go:449] "Adding debug handlers to kubelet server" Sep 6 01:22:10.205332 kubelet[2729]: I0906 01:22:10.205305 2729 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 6 01:22:10.205465 kubelet[2729]: I0906 01:22:10.205452 2729 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 6 01:22:10.205546 kubelet[2729]: I0906 01:22:10.205537 2729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 01:22:10.213496 kubelet[2729]: I0906 01:22:10.213478 2729 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 01:22:10.214734 kubelet[2729]: I0906 01:22:10.214720 2729 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 01:22:10.214973 kubelet[2729]: E0906 01:22:10.214955 2729 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-34c19deec5\" not found" Sep 6 01:22:10.215629 kubelet[2729]: I0906 01:22:10.215614 2729 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 01:22:10.215827 kubelet[2729]: I0906 01:22:10.215818 2729 reconciler.go:26] "Reconciler: start to sync state" Sep 6 01:22:10.193000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:22:10.225945 kernel: audit: type=1400 audit(1757121730.193:245): avc: denied { mac_admin } for pid=2729 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:10.226017 kernel: audit: type=1401 audit(1757121730.193:245): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:22:10.193000 audit[2729]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a3c660 a1=400086d680 a2=4000a3c630 a3=25 items=0 ppid=1 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:10.252375 kernel: audit: type=1300 audit(1757121730.193:245): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a3c660 a1=400086d680 a2=4000a3c630 a3=25 items=0 ppid=1 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:10.193000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:22:10.259613 kubelet[2729]: I0906 01:22:10.259589 2729 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 01:22:10.269220 kubelet[2729]: I0906 01:22:10.269182 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 01:22:10.270157 kubelet[2729]: I0906 01:22:10.270140 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 01:22:10.270271 kubelet[2729]: I0906 01:22:10.270260 2729 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 01:22:10.270347 kubelet[2729]: I0906 01:22:10.270338 2729 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 01:22:10.270456 kubelet[2729]: E0906 01:22:10.270438 2729 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 01:22:10.278629 kernel: audit: type=1327 audit(1757121730.193:245): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:22:10.204000 audit[2729]: AVC avc: denied { mac_admin } for pid=2729 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:10.299444 kernel: audit: type=1400 audit(1757121730.204:246): avc: denied { mac_admin } for pid=2729 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:10.204000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:22:10.300552 kubelet[2729]: I0906 01:22:10.300532 2729 factory.go:221] Registration of the containerd container factory successfully Sep 6 01:22:10.300656 kubelet[2729]: I0906 01:22:10.300643 2729 factory.go:221] Registration of the systemd container factory successfully Sep 6 01:22:10.309677 kernel: audit: type=1401 audit(1757121730.204:246): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:22:10.310583 kernel: audit: type=1300 audit(1757121730.204:246): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000956f80 a1=400086d698 a2=4000a3c6f0 a3=25 items=0 ppid=1 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:10.204000 audit[2729]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000956f80 a1=400086d698 a2=4000a3c6f0 a3=25 items=0 ppid=1 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:10.204000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:22:10.361747 kernel: audit: type=1327 audit(1757121730.204:246): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:22:10.370627 kubelet[2729]: E0906 01:22:10.370608 2729 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 01:22:10.393061 kubelet[2729]: I0906 01:22:10.393030 2729 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 01:22:10.393061 kubelet[2729]: I0906 01:22:10.393051 2729 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 01:22:10.393207 kubelet[2729]: I0906 01:22:10.393071 2729 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:22:10.393236 kubelet[2729]: I0906 01:22:10.393216 2729 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 01:22:10.393288 kubelet[2729]: I0906 01:22:10.393233 2729 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 01:22:10.393288 kubelet[2729]: I0906 01:22:10.393277 2729 policy_none.go:49] "None policy: Start" Sep 6 01:22:10.393855 kubelet[2729]: I0906 01:22:10.393829 2729 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 01:22:10.393855 kubelet[2729]: I0906 01:22:10.393854 2729 state_mem.go:35] "Initializing new in-memory state store" Sep 6 01:22:10.394006 kubelet[2729]: I0906 01:22:10.393987 2729 state_mem.go:75] "Updated machine memory state" Sep 6 01:22:10.395090 kubelet[2729]: I0906 01:22:10.395066 2729 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 01:22:10.393000 audit[2729]: AVC avc: denied { mac_admin } for pid=2729 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:10.393000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:22:10.393000 audit[2729]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40009d0090 a1=40010ff368 a2=40009d0060 a3=25 items=0 ppid=1 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:10.393000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:22:10.395306 kubelet[2729]: I0906 01:22:10.395150 2729 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 6 01:22:10.395340 kubelet[2729]: I0906 01:22:10.395309 2729 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 01:22:10.395340 kubelet[2729]: I0906 01:22:10.395320 2729 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 01:22:10.398077 kubelet[2729]: I0906 01:22:10.396982 2729 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 01:22:10.502013 kubelet[2729]: I0906 01:22:10.501925 2729 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:10.515308 kubelet[2729]: I0906 01:22:10.515274 2729 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:10.515430 kubelet[2729]: I0906 01:22:10.515363 2729 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:10.587757 kubelet[2729]: W0906 01:22:10.587572 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:22:10.592316 kubelet[2729]: W0906 01:22:10.592291 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:22:10.592866 kubelet[2729]: W0906 01:22:10.592853 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:22:10.592993 kubelet[2729]: E0906 01:22:10.592977 2729 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.8-n-34c19deec5\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:10.617681 kubelet[2729]: I0906 01:22:10.617616 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4834508a2b49ebd6ebe17c22e93e874d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-34c19deec5\" (UID: \"4834508a2b49ebd6ebe17c22e93e874d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:10.617800 kubelet[2729]: I0906 01:22:10.617787 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1bb0f18dccac4472bb4caf530ad6f41d-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-34c19deec5\" (UID: \"1bb0f18dccac4472bb4caf530ad6f41d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:10.617899 kubelet[2729]: I0906 01:22:10.617886 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1bb0f18dccac4472bb4caf530ad6f41d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-34c19deec5\" (UID: \"1bb0f18dccac4472bb4caf530ad6f41d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:10.617978 kubelet[2729]: I0906 01:22:10.617967 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1bb0f18dccac4472bb4caf530ad6f41d-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-3510.3.8-n-34c19deec5\" (UID: \"1bb0f18dccac4472bb4caf530ad6f41d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:10.618068 kubelet[2729]: I0906 01:22:10.618054 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4834508a2b49ebd6ebe17c22e93e874d-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-34c19deec5\" (UID: \"4834508a2b49ebd6ebe17c22e93e874d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:10.618145 kubelet[2729]: I0906 01:22:10.618134 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4834508a2b49ebd6ebe17c22e93e874d-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-34c19deec5\" (UID: \"4834508a2b49ebd6ebe17c22e93e874d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:10.618228 kubelet[2729]: I0906 01:22:10.618217 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4834508a2b49ebd6ebe17c22e93e874d-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-34c19deec5\" (UID: \"4834508a2b49ebd6ebe17c22e93e874d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:10.618328 kubelet[2729]: I0906 01:22:10.618316 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4834508a2b49ebd6ebe17c22e93e874d-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-34c19deec5\" (UID: \"4834508a2b49ebd6ebe17c22e93e874d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:10.618459 kubelet[2729]: I0906 01:22:10.618423 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c29fedbff69de6afbfcd353a058263d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-34c19deec5\" (UID: \"3c29fedbff69de6afbfcd353a058263d\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:11.180337 kubelet[2729]: I0906 01:22:11.180303 2729 apiserver.go:52] "Watching apiserver" Sep 6 01:22:11.216520 kubelet[2729]: I0906 01:22:11.216476 2729 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 01:22:11.287722 kubelet[2729]: I0906 01:22:11.287648 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34c19deec5" podStartSLOduration=1.287631583 podStartE2EDuration="1.287631583s" podCreationTimestamp="2025-09-06 01:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:22:11.27629041 +0000 UTC m=+1.164394660" watchObservedRunningTime="2025-09-06 01:22:11.287631583 +0000 UTC m=+1.175735753" Sep 6 01:22:11.307494 kubelet[2729]: I0906 01:22:11.307439 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-34c19deec5" podStartSLOduration=3.307420005 podStartE2EDuration="3.307420005s" podCreationTimestamp="2025-09-06 01:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-06 01:22:11.288255463 +0000 UTC m=+1.176359673" watchObservedRunningTime="2025-09-06 01:22:11.307420005 +0000 UTC m=+1.195524215" Sep 6 01:22:11.322646 kubelet[2729]: I0906 01:22:11.322576 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-34c19deec5" podStartSLOduration=1.322559141 podStartE2EDuration="1.322559141s" podCreationTimestamp="2025-09-06 01:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:22:11.307856925 +0000 UTC m=+1.195961135" watchObservedRunningTime="2025-09-06 01:22:11.322559141 +0000 UTC m=+1.210663351" Sep 6 01:22:11.384795 kubelet[2729]: W0906 01:22:11.384756 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:22:11.385070 kubelet[2729]: E0906 01:22:11.385043 2729 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-34c19deec5\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-34c19deec5" Sep 6 01:22:15.649855 kubelet[2729]: I0906 01:22:15.649796 2729 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 01:22:15.650308 env[1586]: time="2025-09-06T01:22:15.650267735Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 01:22:15.650529 kubelet[2729]: I0906 01:22:15.650515 2729 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 01:22:16.550413 kubelet[2729]: I0906 01:22:16.550380 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dceadc09-8634-47f2-b56f-f37b6f96598a-xtables-lock\") pod \"kube-proxy-lcfmn\" (UID: \"dceadc09-8634-47f2-b56f-f37b6f96598a\") " pod="kube-system/kube-proxy-lcfmn" Sep 6 01:22:16.550624 kubelet[2729]: I0906 01:22:16.550608 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dceadc09-8634-47f2-b56f-f37b6f96598a-kube-proxy\") pod \"kube-proxy-lcfmn\" (UID: \"dceadc09-8634-47f2-b56f-f37b6f96598a\") " pod="kube-system/kube-proxy-lcfmn" Sep 6 01:22:16.550714 kubelet[2729]: I0906 01:22:16.550700 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dceadc09-8634-47f2-b56f-f37b6f96598a-lib-modules\") pod \"kube-proxy-lcfmn\" (UID: \"dceadc09-8634-47f2-b56f-f37b6f96598a\") " pod="kube-system/kube-proxy-lcfmn" Sep 6 01:22:16.550789 kubelet[2729]: I0906 01:22:16.550774 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9spvm\" (UniqueName: \"kubernetes.io/projected/dceadc09-8634-47f2-b56f-f37b6f96598a-kube-api-access-9spvm\") pod \"kube-proxy-lcfmn\" (UID: \"dceadc09-8634-47f2-b56f-f37b6f96598a\") " pod="kube-system/kube-proxy-lcfmn" Sep 6 01:22:16.660038 kubelet[2729]: I0906 01:22:16.660007 2729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 01:22:16.752632 kubelet[2729]: I0906 01:22:16.752588 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/80885ddc-3780-4a3e-897e-4e268a7dea45-var-lib-calico\") pod \"tigera-operator-58fc44c59b-gm94r\" (UID: \"80885ddc-3780-4a3e-897e-4e268a7dea45\") " pod="tigera-operator/tigera-operator-58fc44c59b-gm94r" Sep 6 01:22:16.752632 kubelet[2729]: I0906 01:22:16.752637 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twjmm\" (UniqueName: \"kubernetes.io/projected/80885ddc-3780-4a3e-897e-4e268a7dea45-kube-api-access-twjmm\") pod \"tigera-operator-58fc44c59b-gm94r\" (UID: \"80885ddc-3780-4a3e-897e-4e268a7dea45\") " pod="tigera-operator/tigera-operator-58fc44c59b-gm94r" Sep 6 01:22:16.786169 env[1586]: time="2025-09-06T01:22:16.785782504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lcfmn,Uid:dceadc09-8634-47f2-b56f-f37b6f96598a,Namespace:kube-system,Attempt:0,}" Sep 6 01:22:16.846564 env[1586]: time="2025-09-06T01:22:16.846067164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:22:16.846795 env[1586]: time="2025-09-06T01:22:16.846747405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:22:16.846886 env[1586]: time="2025-09-06T01:22:16.846866405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:22:16.847089 env[1586]: time="2025-09-06T01:22:16.847057565Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec86246371189df66af2c45a3dd0b5a439a98ca6befb9c548e961d15a376f218 pid=2779 runtime=io.containerd.runc.v2 Sep 6 01:22:16.865343 systemd[1]: run-containerd-runc-k8s.io-ec86246371189df66af2c45a3dd0b5a439a98ca6befb9c548e961d15a376f218-runc.fCN7nU.mount: Deactivated successfully. 
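Most kubelet entries in this section are klog-formatted: a severity letter, the date as MMDD, a timestamp, the emitting PID, the source file and line, then a quoted structured message (for example, I0906 01:22:10.187803 2729 server.go:1274] "Started kubelet"). A small parsing sketch for pulling those fields apart follows; the sample line is copied from the log above, while the regular expression itself is an illustrative assumption rather than kubelet's own format definition.

    import re

    # Rough parser for the klog-style kubelet lines quoted in this section:
    # severity, MMDD, time, pid, source file:line, message.
    KLOG_RE = re.compile(
        r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) '
        r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +'
        r'(?P<pid>\d+) '
        r'(?P<src>[\w.]+:\d+)\] '
        r'(?P<msg>.*)'
    )

    line = 'I0906 01:22:10.187803 2729 server.go:1274] "Started kubelet"'
    m = KLOG_RE.match(line)
    if m:
        print(m.group("sev"), m.group("time"), m.group("src"), m.group("msg"))
    # -> I 01:22:10.187803 server.go:1274 "Started kubelet"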
Sep 6 01:22:16.892352 env[1586]: time="2025-09-06T01:22:16.892315130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lcfmn,Uid:dceadc09-8634-47f2-b56f-f37b6f96598a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec86246371189df66af2c45a3dd0b5a439a98ca6befb9c548e961d15a376f218\"" Sep 6 01:22:16.895306 env[1586]: time="2025-09-06T01:22:16.895194892Z" level=info msg="CreateContainer within sandbox \"ec86246371189df66af2c45a3dd0b5a439a98ca6befb9c548e961d15a376f218\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 01:22:16.968431 env[1586]: time="2025-09-06T01:22:16.968379005Z" level=info msg="CreateContainer within sandbox \"ec86246371189df66af2c45a3dd0b5a439a98ca6befb9c548e961d15a376f218\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"54112d67dc9cce905dfafd1d5a20b679bcdc922701cc3bfc2bf681fd5b7babb1\"" Sep 6 01:22:16.969236 env[1586]: time="2025-09-06T01:22:16.969212326Z" level=info msg="StartContainer for \"54112d67dc9cce905dfafd1d5a20b679bcdc922701cc3bfc2bf681fd5b7babb1\"" Sep 6 01:22:17.028763 env[1586]: time="2025-09-06T01:22:17.028710104Z" level=info msg="StartContainer for \"54112d67dc9cce905dfafd1d5a20b679bcdc922701cc3bfc2bf681fd5b7babb1\" returns successfully" Sep 6 01:22:17.033001 env[1586]: time="2025-09-06T01:22:17.032970188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-gm94r,Uid:80885ddc-3780-4a3e-897e-4e268a7dea45,Namespace:tigera-operator,Attempt:0,}" Sep 6 01:22:17.101914 env[1586]: time="2025-09-06T01:22:17.101387414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:22:17.102087 env[1586]: time="2025-09-06T01:22:17.101425934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:22:17.102087 env[1586]: time="2025-09-06T01:22:17.101436414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:22:17.102337 env[1586]: time="2025-09-06T01:22:17.102303655Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d634fdefb7a4567b08ae2b9f18320df2ddc7187cf9b1333bbea3c37b727d3e8 pid=2864 runtime=io.containerd.runc.v2 Sep 6 01:22:17.144000 audit[2921]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.151168 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 6 01:22:17.151312 kernel: audit: type=1325 audit(1757121737.144:248): table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.144000 audit[2921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff8920d10 a2=0 a3=1 items=0 ppid=2834 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.191814 kernel: audit: type=1300 audit(1757121737.144:248): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff8920d10 a2=0 a3=1 items=0 ppid=2834 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.194903 kernel: audit: type=1327 audit(1757121737.144:248): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 01:22:17.144000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 01:22:17.196043 env[1586]: time="2025-09-06T01:22:17.195988106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-gm94r,Uid:80885ddc-3780-4a3e-897e-4e268a7dea45,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9d634fdefb7a4567b08ae2b9f18320df2ddc7187cf9b1333bbea3c37b727d3e8\"" Sep 6 01:22:17.199550 env[1586]: time="2025-09-06T01:22:17.199517029Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 6 01:22:17.144000 audit[2922]: NETFILTER_CFG table=mangle:42 family=10 entries=1 op=nft_register_chain pid=2922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.221077 kernel: audit: type=1325 audit(1757121737.144:249): table=mangle:42 family=10 entries=1 op=nft_register_chain pid=2922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.221145 kernel: audit: type=1300 audit(1757121737.144:249): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdf594b70 a2=0 a3=1 items=0 ppid=2834 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.144000 audit[2922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdf594b70 a2=0 a3=1 items=0 ppid=2834 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.144000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 
01:22:17.262944 kernel: audit: type=1327 audit(1757121737.144:249): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 01:22:17.144000 audit[2923]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.277489 kernel: audit: type=1325 audit(1757121737.144:250): table=nat:43 family=2 entries=1 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.144000 audit[2923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee474f70 a2=0 a3=1 items=0 ppid=2834 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.144000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 6 01:22:17.320728 kernel: audit: type=1300 audit(1757121737.144:250): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee474f70 a2=0 a3=1 items=0 ppid=2834 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.320872 kernel: audit: type=1327 audit(1757121737.144:250): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 6 01:22:17.144000 audit[2924]: NETFILTER_CFG table=nat:44 family=10 entries=1 op=nft_register_chain pid=2924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.334970 kernel: audit: type=1325 audit(1757121737.144:251): table=nat:44 family=10 entries=1 op=nft_register_chain pid=2924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.144000 audit[2924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf31ab20 a2=0 a3=1 items=0 ppid=2834 pid=2924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.144000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 6 01:22:17.150000 audit[2925]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=2925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.150000 audit[2925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcd169700 a2=0 a3=1 items=0 ppid=2834 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.150000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 6 01:22:17.150000 audit[2926]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=2926 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.150000 audit[2926]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe3a8a000 a2=0 a3=1 items=0 ppid=2834 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.150000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 6 01:22:17.245000 audit[2930]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.245000 audit[2930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe3337f10 a2=0 a3=1 items=0 ppid=2834 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.245000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 6 01:22:17.253000 audit[2932]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.253000 audit[2932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcfeadf90 a2=0 a3=1 items=0 ppid=2834 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.253000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 6 01:22:17.268000 audit[2935]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=2935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.268000 audit[2935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcd992d60 a2=0 a3=1 items=0 ppid=2834 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.268000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 6 01:22:17.268000 audit[2936]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.268000 audit[2936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde83a330 a2=0 a3=1 items=0 ppid=2834 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.268000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 6 01:22:17.268000 audit[2938]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.268000 audit[2938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffa703b80 a2=0 a3=1 items=0 ppid=2834 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.268000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 6 01:22:17.273000 audit[2939]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.273000 audit[2939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe310ba20 a2=0 a3=1 items=0 ppid=2834 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.273000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 6 01:22:17.277000 audit[2941]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.277000 audit[2941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd37be5a0 a2=0 a3=1 items=0 ppid=2834 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 6 01:22:17.320000 audit[2944]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.320000 audit[2944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff75f3820 a2=0 a3=1 items=0 ppid=2834 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.320000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 6 01:22:17.320000 audit[2945]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=2945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.320000 audit[2945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf816810 a2=0 a3=1 items=0 ppid=2834 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.320000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 6 01:22:17.325000 audit[2947]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2947 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
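The audit PROCTITLE fields in the records above are hex-encoded, NUL-separated argv strings, so the exact iptables commands kube-proxy issued can be recovered straight from the log. A minimal Python sketch, using one proctitle value copied from the record above (the decode_proctitle helper is illustrative and not part of any host tooling):

    # Decode an audit PROCTITLE value (hex of a NUL-separated argv)
    # back into the command line it records.
    def decode_proctitle(hex_str: str) -> str:
        return " ".join(bytes.fromhex(hex_str).decode("utf-8", "replace").split("\x00"))

    print(decode_proctitle(
        "69707461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D5345525649434553002D740066696C746572"
    ))
    # prints: iptables -w 5 -W 100000 -N KUBE-SERVICES -t filter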
Sep 6 01:22:17.325000 audit[2947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffafbea70 a2=0 a3=1 items=0 ppid=2834 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.325000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 6 01:22:17.330000 audit[2948]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.330000 audit[2948]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd0b10540 a2=0 a3=1 items=0 ppid=2834 pid=2948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.330000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 6 01:22:17.330000 audit[2950]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=2950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.330000 audit[2950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc621b2e0 a2=0 a3=1 items=0 ppid=2834 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.330000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 6 01:22:17.337000 audit[2953]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_rule pid=2953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.337000 audit[2953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeb8a4680 a2=0 a3=1 items=0 ppid=2834 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.337000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 6 01:22:17.340000 audit[2956]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=2956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.340000 audit[2956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc1e52880 a2=0 a3=1 items=0 ppid=2834 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.340000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 6 01:22:17.341000 audit[2957]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.341000 audit[2957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe2d7a7b0 a2=0 a3=1 items=0 ppid=2834 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.341000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 6 01:22:17.344000 audit[2959]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.344000 audit[2959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcb7f9e10 a2=0 a3=1 items=0 ppid=2834 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.344000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 01:22:17.348000 audit[2962]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_rule pid=2962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.348000 audit[2962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc8903700 a2=0 a3=1 items=0 ppid=2834 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 01:22:17.349000 audit[2963]: NETFILTER_CFG table=nat:64 family=2 entries=1 op=nft_register_chain pid=2963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.349000 audit[2963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffdea150 a2=0 a3=1 items=0 ppid=2834 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 6 01:22:17.352000 audit[2965]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_rule pid=2965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:22:17.352000 audit[2965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff7453d20 a2=0 a3=1 items=0 ppid=2834 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.352000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 6 01:22:17.394646 kubelet[2729]: I0906 01:22:17.394589 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lcfmn" podStartSLOduration=1.394573978 podStartE2EDuration="1.394573978s" podCreationTimestamp="2025-09-06 01:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:22:17.393290097 +0000 UTC m=+7.281394267" watchObservedRunningTime="2025-09-06 01:22:17.394573978 +0000 UTC m=+7.282678188" Sep 6 01:22:17.409000 audit[2971]: NETFILTER_CFG table=filter:66 family=2 entries=8 op=nft_register_rule pid=2971 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:17.409000 audit[2971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffc837b30 a2=0 a3=1 items=0 ppid=2834 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.409000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:17.441000 audit[2971]: NETFILTER_CFG table=nat:67 family=2 entries=14 op=nft_register_chain pid=2971 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:17.441000 audit[2971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffffc837b30 a2=0 a3=1 items=0 ppid=2834 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.441000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:17.443000 audit[2976]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.443000 audit[2976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd9089f40 a2=0 a3=1 items=0 ppid=2834 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.443000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 6 01:22:17.446000 audit[2978]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=2978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.446000 audit[2978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc247f9b0 a2=0 a3=1 items=0 ppid=2834 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.446000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 6 01:22:17.449000 audit[2981]: NETFILTER_CFG table=filter:70 family=10 entries=2 op=nft_register_chain pid=2981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.449000 audit[2981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd320ebb0 a2=0 a3=1 items=0 ppid=2834 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.449000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 6 01:22:17.450000 audit[2982]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_chain pid=2982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.450000 audit[2982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf7055f0 a2=0 a3=1 items=0 ppid=2834 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.450000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 6 01:22:17.452000 audit[2984]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.452000 audit[2984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe9b648c0 a2=0 a3=1 items=0 ppid=2834 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.452000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 6 01:22:17.453000 audit[2985]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.453000 audit[2985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe3999a30 a2=0 a3=1 items=0 ppid=2834 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.453000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 6 01:22:17.455000 audit[2987]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.455000 audit[2987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe87f8340 a2=0 a3=1 items=0 ppid=2834 pid=2987 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.455000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 6 01:22:17.458000 audit[2990]: NETFILTER_CFG table=filter:75 family=10 entries=2 op=nft_register_chain pid=2990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.458000 audit[2990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffffb002670 a2=0 a3=1 items=0 ppid=2834 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.458000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 6 01:22:17.460000 audit[2991]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_chain pid=2991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.460000 audit[2991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa67c0c0 a2=0 a3=1 items=0 ppid=2834 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 6 01:22:17.462000 audit[2993]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.462000 audit[2993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff8d1d3c0 a2=0 a3=1 items=0 ppid=2834 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 6 01:22:17.463000 audit[2994]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_chain pid=2994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.463000 audit[2994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd4c25d10 a2=0 a3=1 items=0 ppid=2834 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.463000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 6 01:22:17.465000 audit[2996]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule pid=2996 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.465000 audit[2996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffff847cf0 a2=0 a3=1 items=0 ppid=2834 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.465000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 6 01:22:17.468000 audit[2999]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=2999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.468000 audit[2999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcf406d10 a2=0 a3=1 items=0 ppid=2834 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.468000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 6 01:22:17.471000 audit[3002]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=3002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.471000 audit[3002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeb7ec4d0 a2=0 a3=1 items=0 ppid=2834 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.471000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 6 01:22:17.472000 audit[3003]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.472000 audit[3003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff51cf4e0 a2=0 a3=1 items=0 ppid=2834 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 6 01:22:17.474000 audit[3005]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.474000 audit[3005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffdaf873a0 a2=0 a3=1 items=0 ppid=2834 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.474000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 01:22:17.477000 audit[3008]: NETFILTER_CFG table=nat:84 family=10 entries=2 op=nft_register_chain pid=3008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.477000 audit[3008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe433b0d0 a2=0 a3=1 items=0 ppid=2834 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.477000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 01:22:17.478000 audit[3009]: NETFILTER_CFG table=nat:85 family=10 entries=1 op=nft_register_chain pid=3009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.478000 audit[3009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5f2ab60 a2=0 a3=1 items=0 ppid=2834 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.478000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 6 01:22:17.480000 audit[3011]: NETFILTER_CFG table=nat:86 family=10 entries=2 op=nft_register_chain pid=3011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.480000 audit[3011]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff1aef6d0 a2=0 a3=1 items=0 ppid=2834 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.480000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 6 01:22:17.481000 audit[3012]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.481000 audit[3012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef464c70 a2=0 a3=1 items=0 ppid=2834 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.481000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 6 01:22:17.483000 audit[3014]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.483000 audit[3014]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffde1f6ca0 a2=0 a3=1 items=0 ppid=2834 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.483000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 01:22:17.486000 audit[3017]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_rule pid=3017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:22:17.486000 audit[3017]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc0df7330 a2=0 a3=1 items=0 ppid=2834 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.486000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 01:22:17.488000 audit[3019]: NETFILTER_CFG table=filter:90 family=10 entries=3 op=nft_register_rule pid=3019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 6 01:22:17.488000 audit[3019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffdf6b09b0 a2=0 a3=1 items=0 ppid=2834 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.488000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:17.489000 audit[3019]: NETFILTER_CFG table=nat:91 family=10 entries=7 op=nft_register_chain pid=3019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 6 01:22:17.489000 audit[3019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffdf6b09b0 a2=0 a3=1 items=0 ppid=2834 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:17.489000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:18.664097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159308522.mount: Deactivated successfully. 
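Taken together, the NETFILTER_CFG records above show kube-proxy creating its base chains one rule at a time via iptables/ip6tables for both IPv4 (family=2) and IPv6 (family=10), then syncing in bulk with iptables-restore/ip6tables-restore using --noflush --counters (decoded from the proctitle values). A rough Python sketch, assuming the journal text has been saved to a file (the path kube-proxy-audit.log is hypothetical), that tallies how many nft objects were registered per table and family:

    import re
    from collections import Counter

    FAMILIES = {"2": "ipv4", "10": "ipv6"}
    # Matches the NETFILTER_CFG records in this log, e.g.
    # "NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_chain"
    RECORD = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+)")

    def tally(journal_text: str) -> Counter:
        counts = Counter()
        for table, family, entries in RECORD.findall(journal_text):
            counts[(table, FAMILIES.get(family, family))] += int(entries)
        return counts

    # usage (hypothetical path): tally(open("kube-proxy-audit.log").read())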
Sep 6 01:22:19.484444 env[1586]: time="2025-09-06T01:22:19.484379044Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:19.499695 env[1586]: time="2025-09-06T01:22:19.499655698Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:19.509792 env[1586]: time="2025-09-06T01:22:19.509765187Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:19.516900 env[1586]: time="2025-09-06T01:22:19.516865674Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:19.517572 env[1586]: time="2025-09-06T01:22:19.517543875Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 6 01:22:19.520031 env[1586]: time="2025-09-06T01:22:19.520004437Z" level=info msg="CreateContainer within sandbox \"9d634fdefb7a4567b08ae2b9f18320df2ddc7187cf9b1333bbea3c37b727d3e8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 6 01:22:19.566487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2644357295.mount: Deactivated successfully. Sep 6 01:22:19.585896 env[1586]: time="2025-09-06T01:22:19.585855298Z" level=info msg="CreateContainer within sandbox \"9d634fdefb7a4567b08ae2b9f18320df2ddc7187cf9b1333bbea3c37b727d3e8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8c7e958756aa6b8d24c45e944e78a65be0ba2ee00951bbb60b54b22998b74a7d\"" Sep 6 01:22:19.586303 env[1586]: time="2025-09-06T01:22:19.586235698Z" level=info msg="StartContainer for \"8c7e958756aa6b8d24c45e944e78a65be0ba2ee00951bbb60b54b22998b74a7d\"" Sep 6 01:22:19.642576 env[1586]: time="2025-09-06T01:22:19.642523271Z" level=info msg="StartContainer for \"8c7e958756aa6b8d24c45e944e78a65be0ba2ee00951bbb60b54b22998b74a7d\" returns successfully" Sep 6 01:22:22.612368 kubelet[2729]: I0906 01:22:22.612307 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-gm94r" podStartSLOduration=4.290870659 podStartE2EDuration="6.612279268s" podCreationTimestamp="2025-09-06 01:22:16 +0000 UTC" firstStartedPulling="2025-09-06 01:22:17.197329347 +0000 UTC m=+7.085433517" lastFinishedPulling="2025-09-06 01:22:19.518737916 +0000 UTC m=+9.406842126" observedRunningTime="2025-09-06 01:22:20.419632906 +0000 UTC m=+10.307737076" watchObservedRunningTime="2025-09-06 01:22:22.612279268 +0000 UTC m=+12.500383478" Sep 6 01:22:25.589532 sudo[1958]: pam_unix(sudo:session): session closed for user root Sep 6 01:22:25.588000 audit[1958]: USER_END pid=1958 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 6 01:22:25.594425 kernel: kauditd_printk_skb: 143 callbacks suppressed Sep 6 01:22:25.594500 kernel: audit: type=1106 audit(1757121745.588:299): pid=1958 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 01:22:25.603000 audit[1958]: CRED_DISP pid=1958 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 01:22:25.638495 kernel: audit: type=1104 audit(1757121745.603:300): pid=1958 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 01:22:25.700029 sshd[1954]: pam_unix(sshd:session): session closed for user core Sep 6 01:22:25.699000 audit[1954]: USER_END pid=1954 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:22:25.702941 systemd-logind[1571]: Session 9 logged out. Waiting for processes to exit. Sep 6 01:22:25.706600 systemd[1]: sshd@6-10.200.20.27:22-10.200.16.10:53414.service: Deactivated successfully. Sep 6 01:22:25.707318 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 01:22:25.708743 systemd-logind[1571]: Removed session 9. Sep 6 01:22:25.699000 audit[1954]: CRED_DISP pid=1954 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:22:25.761605 kernel: audit: type=1106 audit(1757121745.699:301): pid=1954 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:22:25.761736 kernel: audit: type=1104 audit(1757121745.699:302): pid=1954 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:22:25.761763 kernel: audit: type=1131 audit(1757121745.705:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.27:22-10.200.16.10:53414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:25.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.27:22-10.200.16.10:53414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:22:27.148000 audit[3102]: NETFILTER_CFG table=filter:92 family=2 entries=15 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:27.148000 audit[3102]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd5533810 a2=0 a3=1 items=0 ppid=2834 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:27.198533 kernel: audit: type=1325 audit(1757121747.148:304): table=filter:92 family=2 entries=15 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:27.198671 kernel: audit: type=1300 audit(1757121747.148:304): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd5533810 a2=0 a3=1 items=0 ppid=2834 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:27.148000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:27.216067 kernel: audit: type=1327 audit(1757121747.148:304): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:27.200000 audit[3102]: NETFILTER_CFG table=nat:93 family=2 entries=12 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:27.231340 kernel: audit: type=1325 audit(1757121747.200:305): table=nat:93 family=2 entries=12 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:27.200000 audit[3102]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd5533810 a2=0 a3=1 items=0 ppid=2834 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:27.261269 kernel: audit: type=1300 audit(1757121747.200:305): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd5533810 a2=0 a3=1 items=0 ppid=2834 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:27.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:27.265000 audit[3105]: NETFILTER_CFG table=filter:94 family=2 entries=16 op=nft_register_rule pid=3105 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:27.265000 audit[3105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffdec2330 a2=0 a3=1 items=0 ppid=2834 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:27.265000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:27.270000 audit[3105]: NETFILTER_CFG table=nat:95 family=2 entries=12 op=nft_register_rule pid=3105 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" 
Sep 6 01:22:27.270000 audit[3105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffdec2330 a2=0 a3=1 items=0 ppid=2834 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:27.270000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:31.765000 audit[3107]: NETFILTER_CFG table=filter:96 family=2 entries=17 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:31.771555 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 6 01:22:31.771672 kernel: audit: type=1325 audit(1757121751.765:308): table=filter:96 family=2 entries=17 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:31.765000 audit[3107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc945bfe0 a2=0 a3=1 items=0 ppid=2834 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:31.820281 kernel: audit: type=1300 audit(1757121751.765:308): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc945bfe0 a2=0 a3=1 items=0 ppid=2834 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:31.820406 kernel: audit: type=1327 audit(1757121751.765:308): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:31.765000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:31.793000 audit[3107]: NETFILTER_CFG table=nat:97 family=2 entries=12 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:31.849942 kernel: audit: type=1325 audit(1757121751.793:309): table=nat:97 family=2 entries=12 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:31.793000 audit[3107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc945bfe0 a2=0 a3=1 items=0 ppid=2834 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:31.879092 kernel: audit: type=1300 audit(1757121751.793:309): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc945bfe0 a2=0 a3=1 items=0 ppid=2834 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:31.793000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:31.892682 kernel: audit: type=1327 audit(1757121751.793:309): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:31.900000 audit[3109]: NETFILTER_CFG table=filter:98 family=2 entries=19 
op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:31.917843 kernel: audit: type=1325 audit(1757121751.900:310): table=filter:98 family=2 entries=19 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:31.900000 audit[3109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe928b330 a2=0 a3=1 items=0 ppid=2834 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:31.945936 kernel: audit: type=1300 audit(1757121751.900:310): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe928b330 a2=0 a3=1 items=0 ppid=2834 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:31.948552 kubelet[2729]: I0906 01:22:31.948503 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/809ca197-053c-4961-b356-592598e26c75-tigera-ca-bundle\") pod \"calico-typha-58458b4499-rwghs\" (UID: \"809ca197-053c-4961-b356-592598e26c75\") " pod="calico-system/calico-typha-58458b4499-rwghs" Sep 6 01:22:31.948552 kubelet[2729]: I0906 01:22:31.948547 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8cpq\" (UniqueName: \"kubernetes.io/projected/809ca197-053c-4961-b356-592598e26c75-kube-api-access-h8cpq\") pod \"calico-typha-58458b4499-rwghs\" (UID: \"809ca197-053c-4961-b356-592598e26c75\") " pod="calico-system/calico-typha-58458b4499-rwghs" Sep 6 01:22:31.948954 kubelet[2729]: I0906 01:22:31.948565 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/809ca197-053c-4961-b356-592598e26c75-typha-certs\") pod \"calico-typha-58458b4499-rwghs\" (UID: \"809ca197-053c-4961-b356-592598e26c75\") " pod="calico-system/calico-typha-58458b4499-rwghs" Sep 6 01:22:31.900000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:31.964994 kernel: audit: type=1327 audit(1757121751.900:310): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:31.949000 audit[3109]: NETFILTER_CFG table=nat:99 family=2 entries=12 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:31.981705 kernel: audit: type=1325 audit(1757121751.949:311): table=nat:99 family=2 entries=12 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:31.949000 audit[3109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe928b330 a2=0 a3=1 items=0 ppid=2834 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:31.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:32.150264 kubelet[2729]: I0906 01:22:32.150144 2729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bad9dc96-d631-43af-b3cd-7e0e32396ff9-cni-bin-dir\") pod \"calico-node-2xsxz\" (UID: \"bad9dc96-d631-43af-b3cd-7e0e32396ff9\") " pod="calico-system/calico-node-2xsxz" Sep 6 01:22:32.150443 kubelet[2729]: I0906 01:22:32.150428 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bad9dc96-d631-43af-b3cd-7e0e32396ff9-tigera-ca-bundle\") pod \"calico-node-2xsxz\" (UID: \"bad9dc96-d631-43af-b3cd-7e0e32396ff9\") " pod="calico-system/calico-node-2xsxz" Sep 6 01:22:32.150540 kubelet[2729]: I0906 01:22:32.150524 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvjc5\" (UniqueName: \"kubernetes.io/projected/bad9dc96-d631-43af-b3cd-7e0e32396ff9-kube-api-access-xvjc5\") pod \"calico-node-2xsxz\" (UID: \"bad9dc96-d631-43af-b3cd-7e0e32396ff9\") " pod="calico-system/calico-node-2xsxz" Sep 6 01:22:32.150623 kubelet[2729]: I0906 01:22:32.150611 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bad9dc96-d631-43af-b3cd-7e0e32396ff9-cni-log-dir\") pod \"calico-node-2xsxz\" (UID: \"bad9dc96-d631-43af-b3cd-7e0e32396ff9\") " pod="calico-system/calico-node-2xsxz" Sep 6 01:22:32.150702 kubelet[2729]: I0906 01:22:32.150690 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bad9dc96-d631-43af-b3cd-7e0e32396ff9-flexvol-driver-host\") pod \"calico-node-2xsxz\" (UID: \"bad9dc96-d631-43af-b3cd-7e0e32396ff9\") " pod="calico-system/calico-node-2xsxz" Sep 6 01:22:32.150785 kubelet[2729]: I0906 01:22:32.150774 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bad9dc96-d631-43af-b3cd-7e0e32396ff9-policysync\") pod \"calico-node-2xsxz\" (UID: \"bad9dc96-d631-43af-b3cd-7e0e32396ff9\") " pod="calico-system/calico-node-2xsxz" Sep 6 01:22:32.150873 kubelet[2729]: I0906 01:22:32.150861 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bad9dc96-d631-43af-b3cd-7e0e32396ff9-lib-modules\") pod \"calico-node-2xsxz\" (UID: \"bad9dc96-d631-43af-b3cd-7e0e32396ff9\") " pod="calico-system/calico-node-2xsxz" Sep 6 01:22:32.150968 kubelet[2729]: I0906 01:22:32.150956 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bad9dc96-d631-43af-b3cd-7e0e32396ff9-cni-net-dir\") pod \"calico-node-2xsxz\" (UID: \"bad9dc96-d631-43af-b3cd-7e0e32396ff9\") " pod="calico-system/calico-node-2xsxz" Sep 6 01:22:32.151060 kubelet[2729]: I0906 01:22:32.151048 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bad9dc96-d631-43af-b3cd-7e0e32396ff9-var-lib-calico\") pod \"calico-node-2xsxz\" (UID: \"bad9dc96-d631-43af-b3cd-7e0e32396ff9\") " pod="calico-system/calico-node-2xsxz" Sep 6 01:22:32.151154 kubelet[2729]: I0906 01:22:32.151139 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bad9dc96-d631-43af-b3cd-7e0e32396ff9-node-certs\") pod \"calico-node-2xsxz\" (UID: \"bad9dc96-d631-43af-b3cd-7e0e32396ff9\") " pod="calico-system/calico-node-2xsxz" Sep 6 01:22:32.151276 kubelet[2729]: I0906 01:22:32.151262 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bad9dc96-d631-43af-b3cd-7e0e32396ff9-var-run-calico\") pod \"calico-node-2xsxz\" (UID: \"bad9dc96-d631-43af-b3cd-7e0e32396ff9\") " pod="calico-system/calico-node-2xsxz" Sep 6 01:22:32.151376 kubelet[2729]: I0906 01:22:32.151363 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bad9dc96-d631-43af-b3cd-7e0e32396ff9-xtables-lock\") pod \"calico-node-2xsxz\" (UID: \"bad9dc96-d631-43af-b3cd-7e0e32396ff9\") " pod="calico-system/calico-node-2xsxz" Sep 6 01:22:32.178084 env[1586]: time="2025-09-06T01:22:32.177692760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58458b4499-rwghs,Uid:809ca197-053c-4961-b356-592598e26c75,Namespace:calico-system,Attempt:0,}" Sep 6 01:22:32.204836 kubelet[2729]: E0906 01:22:32.204781 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r5mms" podUID="d9edb798-4304-4b76-a60a-df0eaa0d87c0" Sep 6 01:22:32.239314 env[1586]: time="2025-09-06T01:22:32.238997125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:22:32.239314 env[1586]: time="2025-09-06T01:22:32.239028805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:22:32.239314 env[1586]: time="2025-09-06T01:22:32.239038565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:22:32.239314 env[1586]: time="2025-09-06T01:22:32.239134325Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2004cb6c9e9438234edd63570a9eaea738823039960a0d68ba014c09af21c854 pid=3119 runtime=io.containerd.runc.v2 Sep 6 01:22:32.272229 kubelet[2729]: E0906 01:22:32.271527 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.272229 kubelet[2729]: W0906 01:22:32.271568 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.272229 kubelet[2729]: E0906 01:22:32.271594 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:32.272654 kubelet[2729]: E0906 01:22:32.272632 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.272704 kubelet[2729]: W0906 01:22:32.272650 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.272704 kubelet[2729]: E0906 01:22:32.272677 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.273517 kubelet[2729]: E0906 01:22:32.273131 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.273517 kubelet[2729]: W0906 01:22:32.273146 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.273517 kubelet[2729]: E0906 01:22:32.273158 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.281329 kubelet[2729]: E0906 01:22:32.277354 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.281329 kubelet[2729]: W0906 01:22:32.277383 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.281329 kubelet[2729]: E0906 01:22:32.277490 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.281525 kubelet[2729]: E0906 01:22:32.281418 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.281525 kubelet[2729]: W0906 01:22:32.281437 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.281589 kubelet[2729]: E0906 01:22:32.281564 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.286282 kubelet[2729]: E0906 01:22:32.283334 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.286282 kubelet[2729]: W0906 01:22:32.283351 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.286282 kubelet[2729]: E0906 01:22:32.283375 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:32.286282 kubelet[2729]: E0906 01:22:32.283575 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.286282 kubelet[2729]: W0906 01:22:32.283584 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.286282 kubelet[2729]: E0906 01:22:32.283595 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.286282 kubelet[2729]: E0906 01:22:32.285353 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.286282 kubelet[2729]: W0906 01:22:32.285371 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.286282 kubelet[2729]: E0906 01:22:32.285387 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.286282 kubelet[2729]: E0906 01:22:32.285640 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.286608 kubelet[2729]: W0906 01:22:32.285649 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.286608 kubelet[2729]: E0906 01:22:32.285659 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.349304 kubelet[2729]: E0906 01:22:32.348203 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.349304 kubelet[2729]: W0906 01:22:32.348246 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.349304 kubelet[2729]: E0906 01:22:32.348283 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.350087 kubelet[2729]: E0906 01:22:32.350056 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.350087 kubelet[2729]: W0906 01:22:32.350080 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.350176 kubelet[2729]: E0906 01:22:32.350096 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:32.353344 kubelet[2729]: E0906 01:22:32.353329 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.353449 kubelet[2729]: W0906 01:22:32.353437 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.353506 kubelet[2729]: E0906 01:22:32.353495 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.353587 kubelet[2729]: I0906 01:22:32.353574 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d9edb798-4304-4b76-a60a-df0eaa0d87c0-socket-dir\") pod \"csi-node-driver-r5mms\" (UID: \"d9edb798-4304-4b76-a60a-df0eaa0d87c0\") " pod="calico-system/csi-node-driver-r5mms" Sep 6 01:22:32.353799 kubelet[2729]: E0906 01:22:32.353785 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.353865 kubelet[2729]: W0906 01:22:32.353854 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.353939 kubelet[2729]: E0906 01:22:32.353926 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.354011 kubelet[2729]: I0906 01:22:32.353999 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9edb798-4304-4b76-a60a-df0eaa0d87c0-kubelet-dir\") pod \"csi-node-driver-r5mms\" (UID: \"d9edb798-4304-4b76-a60a-df0eaa0d87c0\") " pod="calico-system/csi-node-driver-r5mms" Sep 6 01:22:32.354270 kubelet[2729]: E0906 01:22:32.354221 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.354270 kubelet[2729]: W0906 01:22:32.354257 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.354364 kubelet[2729]: E0906 01:22:32.354281 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.356367 kubelet[2729]: E0906 01:22:32.356341 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.356367 kubelet[2729]: W0906 01:22:32.356361 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.356478 kubelet[2729]: E0906 01:22:32.356380 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:32.356643 kubelet[2729]: E0906 01:22:32.356625 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.356643 kubelet[2729]: W0906 01:22:32.356639 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.356714 kubelet[2729]: E0906 01:22:32.356660 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.356714 kubelet[2729]: I0906 01:22:32.356681 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d9edb798-4304-4b76-a60a-df0eaa0d87c0-registration-dir\") pod \"csi-node-driver-r5mms\" (UID: \"d9edb798-4304-4b76-a60a-df0eaa0d87c0\") " pod="calico-system/csi-node-driver-r5mms" Sep 6 01:22:32.356901 kubelet[2729]: E0906 01:22:32.356879 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.356901 kubelet[2729]: W0906 01:22:32.356896 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.357016 kubelet[2729]: E0906 01:22:32.356998 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.357085 kubelet[2729]: I0906 01:22:32.357071 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqx2z\" (UniqueName: \"kubernetes.io/projected/d9edb798-4304-4b76-a60a-df0eaa0d87c0-kube-api-access-pqx2z\") pod \"csi-node-driver-r5mms\" (UID: \"d9edb798-4304-4b76-a60a-df0eaa0d87c0\") " pod="calico-system/csi-node-driver-r5mms" Sep 6 01:22:32.359005 kubelet[2729]: E0906 01:22:32.358967 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.359005 kubelet[2729]: W0906 01:22:32.358994 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.359898 kubelet[2729]: E0906 01:22:32.359376 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:32.359991 env[1586]: time="2025-09-06T01:22:32.359585493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2xsxz,Uid:bad9dc96-d631-43af-b3cd-7e0e32396ff9,Namespace:calico-system,Attempt:0,}" Sep 6 01:22:32.360371 kubelet[2729]: E0906 01:22:32.360346 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.360371 kubelet[2729]: W0906 01:22:32.360365 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.360478 kubelet[2729]: E0906 01:22:32.360458 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.361365 kubelet[2729]: E0906 01:22:32.361341 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.361365 kubelet[2729]: W0906 01:22:32.361359 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.361516 kubelet[2729]: E0906 01:22:32.361495 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.361554 kubelet[2729]: I0906 01:22:32.361525 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d9edb798-4304-4b76-a60a-df0eaa0d87c0-varrun\") pod \"csi-node-driver-r5mms\" (UID: \"d9edb798-4304-4b76-a60a-df0eaa0d87c0\") " pod="calico-system/csi-node-driver-r5mms" Sep 6 01:22:32.362423 kubelet[2729]: E0906 01:22:32.362394 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.362423 kubelet[2729]: W0906 01:22:32.362418 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.362573 kubelet[2729]: E0906 01:22:32.362555 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.362814 kubelet[2729]: E0906 01:22:32.362793 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.362876 kubelet[2729]: W0906 01:22:32.362815 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.362876 kubelet[2729]: E0906 01:22:32.362829 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:32.363192 kubelet[2729]: E0906 01:22:32.363142 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.363192 kubelet[2729]: W0906 01:22:32.363165 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.363192 kubelet[2729]: E0906 01:22:32.363183 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.363749 kubelet[2729]: E0906 01:22:32.363724 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.363818 kubelet[2729]: W0906 01:22:32.363757 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.363818 kubelet[2729]: E0906 01:22:32.363771 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.365371 kubelet[2729]: E0906 01:22:32.365342 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.365371 kubelet[2729]: W0906 01:22:32.365371 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.365472 kubelet[2729]: E0906 01:22:32.365388 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.367775 kubelet[2729]: E0906 01:22:32.367748 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.367775 kubelet[2729]: W0906 01:22:32.367770 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.367871 kubelet[2729]: E0906 01:22:32.367784 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.400830 env[1586]: time="2025-09-06T01:22:32.400692244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58458b4499-rwghs,Uid:809ca197-053c-4961-b356-592598e26c75,Namespace:calico-system,Attempt:0,} returns sandbox id \"2004cb6c9e9438234edd63570a9eaea738823039960a0d68ba014c09af21c854\"" Sep 6 01:22:32.403394 env[1586]: time="2025-09-06T01:22:32.403352446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 6 01:22:32.427277 env[1586]: time="2025-09-06T01:22:32.424556381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:22:32.427277 env[1586]: time="2025-09-06T01:22:32.424604981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:22:32.427277 env[1586]: time="2025-09-06T01:22:32.424615101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:22:32.427277 env[1586]: time="2025-09-06T01:22:32.424744781Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea6b2c916ce86a429412c8e207cad746b650d3fd0f9ea5feca075d785602cc7f pid=3194 runtime=io.containerd.runc.v2 Sep 6 01:22:32.464345 kubelet[2729]: E0906 01:22:32.464315 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.464345 kubelet[2729]: W0906 01:22:32.464345 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.464508 kubelet[2729]: E0906 01:22:32.464365 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.470456 kubelet[2729]: E0906 01:22:32.467585 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.470456 kubelet[2729]: W0906 01:22:32.467603 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.470456 kubelet[2729]: E0906 01:22:32.467630 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.472352 kubelet[2729]: E0906 01:22:32.472329 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.472352 kubelet[2729]: W0906 01:22:32.472345 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.472552 kubelet[2729]: E0906 01:22:32.472449 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.472648 kubelet[2729]: E0906 01:22:32.472629 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.472648 kubelet[2729]: W0906 01:22:32.472643 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.472834 kubelet[2729]: E0906 01:22:32.472753 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:32.473479 kubelet[2729]: E0906 01:22:32.473457 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.473479 kubelet[2729]: W0906 01:22:32.473476 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.474128 kubelet[2729]: E0906 01:22:32.473637 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.475397 kubelet[2729]: E0906 01:22:32.475374 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.475397 kubelet[2729]: W0906 01:22:32.475395 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.475561 kubelet[2729]: E0906 01:22:32.475505 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.475672 kubelet[2729]: E0906 01:22:32.475640 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.475672 kubelet[2729]: W0906 01:22:32.475670 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.476633 kubelet[2729]: E0906 01:22:32.475750 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.476915 kubelet[2729]: E0906 01:22:32.476890 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.476915 kubelet[2729]: W0906 01:22:32.476909 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.477158 kubelet[2729]: E0906 01:22:32.477073 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.477306 kubelet[2729]: E0906 01:22:32.477289 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.477306 kubelet[2729]: W0906 01:22:32.477305 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.477459 kubelet[2729]: E0906 01:22:32.477387 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:32.477551 kubelet[2729]: E0906 01:22:32.477530 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.477551 kubelet[2729]: W0906 01:22:32.477546 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.477749 kubelet[2729]: E0906 01:22:32.477635 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.477844 kubelet[2729]: E0906 01:22:32.477829 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.477890 kubelet[2729]: W0906 01:22:32.477846 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.477922 kubelet[2729]: E0906 01:22:32.477910 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.478060 kubelet[2729]: E0906 01:22:32.478046 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.478060 kubelet[2729]: W0906 01:22:32.478057 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.478135 kubelet[2729]: E0906 01:22:32.478122 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.478292 kubelet[2729]: E0906 01:22:32.478279 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.478292 kubelet[2729]: W0906 01:22:32.478289 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.478368 kubelet[2729]: E0906 01:22:32.478301 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.478687 kubelet[2729]: E0906 01:22:32.478667 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.478687 kubelet[2729]: W0906 01:22:32.478682 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.478882 kubelet[2729]: E0906 01:22:32.478785 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:32.478985 kubelet[2729]: E0906 01:22:32.478969 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.478985 kubelet[2729]: W0906 01:22:32.478981 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.479065 kubelet[2729]: E0906 01:22:32.479045 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.479202 kubelet[2729]: E0906 01:22:32.479189 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.479202 kubelet[2729]: W0906 01:22:32.479199 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.479285 kubelet[2729]: E0906 01:22:32.479265 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.479408 kubelet[2729]: E0906 01:22:32.479391 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.479408 kubelet[2729]: W0906 01:22:32.479402 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.479535 kubelet[2729]: E0906 01:22:32.479464 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.479625 kubelet[2729]: E0906 01:22:32.479610 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.479625 kubelet[2729]: W0906 01:22:32.479620 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.479816 kubelet[2729]: E0906 01:22:32.479721 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.479910 kubelet[2729]: E0906 01:22:32.479897 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.479910 kubelet[2729]: W0906 01:22:32.479908 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.479981 kubelet[2729]: E0906 01:22:32.479919 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:32.480120 kubelet[2729]: E0906 01:22:32.480106 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.480120 kubelet[2729]: W0906 01:22:32.480117 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.480200 kubelet[2729]: E0906 01:22:32.480128 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.480351 kubelet[2729]: E0906 01:22:32.480333 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.480351 kubelet[2729]: W0906 01:22:32.480346 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.480442 kubelet[2729]: E0906 01:22:32.480359 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.480599 kubelet[2729]: E0906 01:22:32.480583 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.480599 kubelet[2729]: W0906 01:22:32.480595 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.480683 kubelet[2729]: E0906 01:22:32.480656 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.480825 kubelet[2729]: E0906 01:22:32.480810 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.480825 kubelet[2729]: W0906 01:22:32.480821 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.486432 kubelet[2729]: E0906 01:22:32.482506 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.486432 kubelet[2729]: E0906 01:22:32.482592 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.486432 kubelet[2729]: W0906 01:22:32.482602 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.486432 kubelet[2729]: E0906 01:22:32.482618 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:32.496319 kubelet[2729]: E0906 01:22:32.496010 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.496319 kubelet[2729]: W0906 01:22:32.496030 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.496319 kubelet[2729]: E0906 01:22:32.496048 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.497217 env[1586]: time="2025-09-06T01:22:32.497176955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2xsxz,Uid:bad9dc96-d631-43af-b3cd-7e0e32396ff9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea6b2c916ce86a429412c8e207cad746b650d3fd0f9ea5feca075d785602cc7f\"" Sep 6 01:22:32.507837 kubelet[2729]: E0906 01:22:32.507816 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:32.508007 kubelet[2729]: W0906 01:22:32.507992 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:32.508073 kubelet[2729]: E0906 01:22:32.508062 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:32.995000 audit[3255]: NETFILTER_CFG table=filter:100 family=2 entries=21 op=nft_register_rule pid=3255 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:32.995000 audit[3255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd5afa040 a2=0 a3=1 items=0 ppid=2834 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:32.995000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:32.999000 audit[3255]: NETFILTER_CFG table=nat:101 family=2 entries=12 op=nft_register_rule pid=3255 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:32.999000 audit[3255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd5afa040 a2=0 a3=1 items=0 ppid=2834 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:32.999000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:33.736830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2984129007.mount: Deactivated successfully. 
Sep 6 01:22:34.272632 kubelet[2729]: E0906 01:22:34.272274 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r5mms" podUID="d9edb798-4304-4b76-a60a-df0eaa0d87c0" Sep 6 01:22:34.576826 env[1586]: time="2025-09-06T01:22:34.576701418Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:34.586830 env[1586]: time="2025-09-06T01:22:34.586784785Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:34.592423 env[1586]: time="2025-09-06T01:22:34.592376589Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:34.598326 env[1586]: time="2025-09-06T01:22:34.598286593Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:34.598898 env[1586]: time="2025-09-06T01:22:34.598864433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 6 01:22:34.607012 env[1586]: time="2025-09-06T01:22:34.606350599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 6 01:22:34.615294 env[1586]: time="2025-09-06T01:22:34.615221125Z" level=info msg="CreateContainer within sandbox \"2004cb6c9e9438234edd63570a9eaea738823039960a0d68ba014c09af21c854\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 6 01:22:34.657110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3355435968.mount: Deactivated successfully. 
Sep 6 01:22:34.683894 env[1586]: time="2025-09-06T01:22:34.683838014Z" level=info msg="CreateContainer within sandbox \"2004cb6c9e9438234edd63570a9eaea738823039960a0d68ba014c09af21c854\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5075a92e0ffd92f1f428a7aae2e070ea4860e5f5f788b004756932f8ed28a8fd\"" Sep 6 01:22:34.685722 env[1586]: time="2025-09-06T01:22:34.684319054Z" level=info msg="StartContainer for \"5075a92e0ffd92f1f428a7aae2e070ea4860e5f5f788b004756932f8ed28a8fd\"" Sep 6 01:22:34.741873 env[1586]: time="2025-09-06T01:22:34.741818375Z" level=info msg="StartContainer for \"5075a92e0ffd92f1f428a7aae2e070ea4860e5f5f788b004756932f8ed28a8fd\" returns successfully" Sep 6 01:22:35.427555 kubelet[2729]: I0906 01:22:35.427480 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58458b4499-rwghs" podStartSLOduration=2.230366589 podStartE2EDuration="4.427464578s" podCreationTimestamp="2025-09-06 01:22:31 +0000 UTC" firstStartedPulling="2025-09-06 01:22:32.402954245 +0000 UTC m=+22.291058455" lastFinishedPulling="2025-09-06 01:22:34.600052274 +0000 UTC m=+24.488156444" observedRunningTime="2025-09-06 01:22:35.427448538 +0000 UTC m=+25.315552748" watchObservedRunningTime="2025-09-06 01:22:35.427464578 +0000 UTC m=+25.315568788" Sep 6 01:22:35.483044 kubelet[2729]: E0906 01:22:35.483009 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.483044 kubelet[2729]: W0906 01:22:35.483034 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.483205 kubelet[2729]: E0906 01:22:35.483053 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:35.483272 kubelet[2729]: E0906 01:22:35.483234 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.483272 kubelet[2729]: W0906 01:22:35.483270 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.483334 kubelet[2729]: E0906 01:22:35.483280 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:35.483446 kubelet[2729]: E0906 01:22:35.483429 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.483446 kubelet[2729]: W0906 01:22:35.483443 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.483501 kubelet[2729]: E0906 01:22:35.483451 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:35.483619 kubelet[2729]: E0906 01:22:35.483598 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.483619 kubelet[2729]: W0906 01:22:35.483616 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.483690 kubelet[2729]: E0906 01:22:35.483625 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:35.483859 kubelet[2729]: E0906 01:22:35.483842 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.483859 kubelet[2729]: W0906 01:22:35.483856 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.483944 kubelet[2729]: E0906 01:22:35.483866 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:35.484026 kubelet[2729]: E0906 01:22:35.484009 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.484026 kubelet[2729]: W0906 01:22:35.484021 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.484090 kubelet[2729]: E0906 01:22:35.484030 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:35.484180 kubelet[2729]: E0906 01:22:35.484164 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.484180 kubelet[2729]: W0906 01:22:35.484176 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.484255 kubelet[2729]: E0906 01:22:35.484184 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:35.484346 kubelet[2729]: E0906 01:22:35.484330 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.484346 kubelet[2729]: W0906 01:22:35.484343 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.484406 kubelet[2729]: E0906 01:22:35.484351 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:35.484508 kubelet[2729]: E0906 01:22:35.484490 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.484508 kubelet[2729]: W0906 01:22:35.484504 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.484568 kubelet[2729]: E0906 01:22:35.484512 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:35.484642 kubelet[2729]: E0906 01:22:35.484625 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.484675 kubelet[2729]: W0906 01:22:35.484643 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.484675 kubelet[2729]: E0906 01:22:35.484651 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:35.484777 kubelet[2729]: E0906 01:22:35.484762 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.484805 kubelet[2729]: W0906 01:22:35.484779 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.484805 kubelet[2729]: E0906 01:22:35.484787 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:35.484923 kubelet[2729]: E0906 01:22:35.484907 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.484955 kubelet[2729]: W0906 01:22:35.484924 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.484955 kubelet[2729]: E0906 01:22:35.484933 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:35.485069 kubelet[2729]: E0906 01:22:35.485054 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.485095 kubelet[2729]: W0906 01:22:35.485070 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.485095 kubelet[2729]: E0906 01:22:35.485078 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:35.485211 kubelet[2729]: E0906 01:22:35.485196 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.485251 kubelet[2729]: W0906 01:22:35.485213 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.485251 kubelet[2729]: E0906 01:22:35.485221 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:35.485393 kubelet[2729]: E0906 01:22:35.485367 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.485393 kubelet[2729]: W0906 01:22:35.485380 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.485458 kubelet[2729]: E0906 01:22:35.485388 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:35.494738 kubelet[2729]: E0906 01:22:35.494714 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.494738 kubelet[2729]: W0906 01:22:35.494731 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.494738 kubelet[2729]: E0906 01:22:35.494742 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:35.494940 kubelet[2729]: E0906 01:22:35.494920 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.494940 kubelet[2729]: W0906 01:22:35.494935 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.495015 kubelet[2729]: E0906 01:22:35.494992 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:35.495206 kubelet[2729]: E0906 01:22:35.495187 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.495206 kubelet[2729]: W0906 01:22:35.495201 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.495307 kubelet[2729]: E0906 01:22:35.495214 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:35.495416 kubelet[2729]: E0906 01:22:35.495397 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.495416 kubelet[2729]: W0906 01:22:35.495412 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.495470 kubelet[2729]: E0906 01:22:35.495424 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:22:35.498439 kubelet[2729]: E0906 01:22:35.498403 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:22:35.498439 kubelet[2729]: W0906 01:22:35.498435 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:22:35.498519 kubelet[2729]: E0906 01:22:35.498445 2729 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:22:36.059026 env[1586]: time="2025-09-06T01:22:36.058978260Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:36.070831 env[1586]: time="2025-09-06T01:22:36.070791828Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:36.078556 env[1586]: time="2025-09-06T01:22:36.078529233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:36.083021 env[1586]: time="2025-09-06T01:22:36.082986196Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:36.083592 env[1586]: time="2025-09-06T01:22:36.083558157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 6 01:22:36.086549 env[1586]: time="2025-09-06T01:22:36.086514959Z" level=info msg="CreateContainer within sandbox \"ea6b2c916ce86a429412c8e207cad746b650d3fd0f9ea5feca075d785602cc7f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 6 01:22:36.126344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2477579652.mount: Deactivated successfully. Sep 6 01:22:36.153850 env[1586]: time="2025-09-06T01:22:36.153786165Z" level=info msg="CreateContainer within sandbox \"ea6b2c916ce86a429412c8e207cad746b650d3fd0f9ea5feca075d785602cc7f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e5a478c0e514c0dd8872d81d88f377fe067fa60db93b72a48cbb006379eb117a\"" Sep 6 01:22:36.154907 env[1586]: time="2025-09-06T01:22:36.154878526Z" level=info msg="StartContainer for \"e5a478c0e514c0dd8872d81d88f377fe067fa60db93b72a48cbb006379eb117a\"" Sep 6 01:22:36.216065 env[1586]: time="2025-09-06T01:22:36.216022568Z" level=info msg="StartContainer for \"e5a478c0e514c0dd8872d81d88f377fe067fa60db93b72a48cbb006379eb117a\" returns successfully" Sep 6 01:22:36.271605 kubelet[2729]: E0906 01:22:36.271554 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r5mms" podUID="d9edb798-4304-4b76-a60a-df0eaa0d87c0" Sep 6 01:22:36.786663 kubelet[2729]: I0906 01:22:36.418629 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 01:22:36.604643 systemd[1]: run-containerd-runc-k8s.io-e5a478c0e514c0dd8872d81d88f377fe067fa60db93b72a48cbb006379eb117a-runc.GRMekJ.mount: Deactivated successfully. Sep 6 01:22:36.604775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5a478c0e514c0dd8872d81d88f377fe067fa60db93b72a48cbb006379eb117a-rootfs.mount: Deactivated successfully. 
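The repeated kubelet errors above come from its FlexVolume dynamic plugin prober: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it execs the driver binary with the init argument and decodes stdout as JSON. The nodeagent~uds driver is not on the node when the prober first runs (installing it is the job of the calico flexvol-driver container whose image pull and start are logged just above), so the call produces no output and the decode fails with "unexpected end of JSON input". The Go sketch below is a minimal, hypothetical illustration of that failure mode using only standard-library calls; the DriverStatus type and callDriver helper are invented for the example and are not the kubelet's actual driver-call code.

```go
// Minimal sketch of the "unexpected end of JSON input" failure mode:
// exec "<driver> init", then unmarshal stdout as JSON. With the driver
// binary missing, stdout is empty and json.Unmarshal fails.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the general shape of a FlexVolume driver reply;
// the field names here are illustrative, not the kubelet's exact types.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// callDriver runs the driver and decodes its JSON reply.
func callDriver(path string, args ...string) (*DriverStatus, error) {
	out, execErr := exec.Command(path, args...).CombinedOutput()
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With empty output this is exactly "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	// Path taken from the log; on this node the binary does not exist yet.
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}
```

Once the flexvol-driver container has installed the binary, the same probe is expected to return a JSON status instead of empty output and the warnings stop.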
Sep 6 01:22:36.801526 env[1586]: time="2025-09-06T01:22:36.801483252Z" level=info msg="shim disconnected" id=e5a478c0e514c0dd8872d81d88f377fe067fa60db93b72a48cbb006379eb117a Sep 6 01:22:36.801748 env[1586]: time="2025-09-06T01:22:36.801731412Z" level=warning msg="cleaning up after shim disconnected" id=e5a478c0e514c0dd8872d81d88f377fe067fa60db93b72a48cbb006379eb117a namespace=k8s.io Sep 6 01:22:36.801830 env[1586]: time="2025-09-06T01:22:36.801816492Z" level=info msg="cleaning up dead shim" Sep 6 01:22:36.809592 env[1586]: time="2025-09-06T01:22:36.809555777Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:22:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3388 runtime=io.containerd.runc.v2\n" Sep 6 01:22:37.423872 env[1586]: time="2025-09-06T01:22:37.423821516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 6 01:22:38.272603 kubelet[2729]: E0906 01:22:38.272564 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r5mms" podUID="d9edb798-4304-4b76-a60a-df0eaa0d87c0" Sep 6 01:22:40.271938 kubelet[2729]: E0906 01:22:40.271616 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r5mms" podUID="d9edb798-4304-4b76-a60a-df0eaa0d87c0" Sep 6 01:22:40.325489 env[1586]: time="2025-09-06T01:22:40.325454807Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:40.336815 env[1586]: time="2025-09-06T01:22:40.336754254Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:40.341159 env[1586]: time="2025-09-06T01:22:40.341132857Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:40.346548 env[1586]: time="2025-09-06T01:22:40.346513140Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:40.347201 env[1586]: time="2025-09-06T01:22:40.347171781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 6 01:22:40.350284 env[1586]: time="2025-09-06T01:22:40.350201543Z" level=info msg="CreateContainer within sandbox \"ea6b2c916ce86a429412c8e207cad746b650d3fd0f9ea5feca075d785602cc7f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 6 01:22:40.385761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1970304671.mount: Deactivated successfully. 
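At this point the flexvol-driver init container has already run and exited (the shim-disconnected and rootfs unmount messages above), and the kubelet keeps logging NetworkReady=false because the container runtime has not loaded any CNI configuration yet. The install-cni init container created below is what writes the calico CNI binaries and network config into /etc/cni/net.d; separately, the calico/node container must be running and have written /var/lib/calico/nodename before the calico CNI plugin can set up pod sandboxes, which is why the RunPodSandbox attempts further down fail with the stat error. The sketch below is only an illustration of those two readiness conditions, assuming the default paths quoted in the log; it is not containerd's or calico's actual code, and the 10-calico.conflist file name in the comment is illustrative.

```go
// Rough stand-in for the two conditions that keep this node's pod network
// unusable in the surrounding log: no CNI config loaded from /etc/cni/net.d
// ("cni plugin not initialized"), and /var/lib/calico/nodename not yet
// written by the calico/node container.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether any CNI network config exists in confDir.
// calico's install-cni init container drops a file such as 10-calico.conflist here.
func cniConfigPresent(confDir string) bool {
	for _, pattern := range []string{"*.conflist", "*.conf"} {
		if matches, _ := filepath.Glob(filepath.Join(confDir, pattern)); len(matches) > 0 {
			return true
		}
	}
	return false
}

func main() {
	if !cniConfigPresent("/etc/cni/net.d") {
		// Matches the condition behind NetworkReady=false reason:NetworkPluginNotReady.
		fmt.Println("no network config found in /etc/cni/net.d: cni plugin not initialized")
	}
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		// Matches the sandbox setup failures logged below.
		fmt.Println("stat /var/lib/calico/nodename failed:", err,
			"- check that the calico/node container is running and has mounted /var/lib/calico/")
	}
}
```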
Sep 6 01:22:40.410564 env[1586]: time="2025-09-06T01:22:40.410518182Z" level=info msg="CreateContainer within sandbox \"ea6b2c916ce86a429412c8e207cad746b650d3fd0f9ea5feca075d785602cc7f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"165026f3e5f1bdb6c6256b8c10d2cdcd74bc1203707add78912705dd7cbea93d\"" Sep 6 01:22:40.412401 env[1586]: time="2025-09-06T01:22:40.411161382Z" level=info msg="StartContainer for \"165026f3e5f1bdb6c6256b8c10d2cdcd74bc1203707add78912705dd7cbea93d\"" Sep 6 01:22:40.493145 env[1586]: time="2025-09-06T01:22:40.493088996Z" level=info msg="StartContainer for \"165026f3e5f1bdb6c6256b8c10d2cdcd74bc1203707add78912705dd7cbea93d\" returns successfully" Sep 6 01:22:41.383326 systemd[1]: run-containerd-runc-k8s.io-165026f3e5f1bdb6c6256b8c10d2cdcd74bc1203707add78912705dd7cbea93d-runc.H7LsXM.mount: Deactivated successfully. Sep 6 01:22:41.735587 env[1586]: time="2025-09-06T01:22:41.735534516Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 01:22:41.754648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-165026f3e5f1bdb6c6256b8c10d2cdcd74bc1203707add78912705dd7cbea93d-rootfs.mount: Deactivated successfully. Sep 6 01:22:41.763563 kubelet[2729]: I0906 01:22:41.763535 2729 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 01:22:41.835231 kubelet[2729]: I0906 01:22:41.835187 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xltvt\" (UniqueName: \"kubernetes.io/projected/7e1ef8a5-5849-44eb-8c06-2ea19305f74d-kube-api-access-xltvt\") pod \"coredns-7c65d6cfc9-d2dkt\" (UID: \"7e1ef8a5-5849-44eb-8c06-2ea19305f74d\") " pod="kube-system/coredns-7c65d6cfc9-d2dkt" Sep 6 01:22:41.835231 kubelet[2729]: I0906 01:22:41.835229 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntg5m\" (UniqueName: \"kubernetes.io/projected/75a2fc54-827a-4281-a019-abda9f06779c-kube-api-access-ntg5m\") pod \"calico-apiserver-56cb94fc6-989l7\" (UID: \"75a2fc54-827a-4281-a019-abda9f06779c\") " pod="calico-apiserver/calico-apiserver-56cb94fc6-989l7" Sep 6 01:22:41.835429 kubelet[2729]: I0906 01:22:41.835268 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4c05567-6639-4b0f-94fa-e71542024f21-tigera-ca-bundle\") pod \"calico-kube-controllers-9fd44b5cd-ggmhh\" (UID: \"f4c05567-6639-4b0f-94fa-e71542024f21\") " pod="calico-system/calico-kube-controllers-9fd44b5cd-ggmhh" Sep 6 01:22:41.835429 kubelet[2729]: I0906 01:22:41.835286 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wrtp\" (UniqueName: \"kubernetes.io/projected/f5d38221-ee04-42c6-b763-9c6f6f204114-kube-api-access-5wrtp\") pod \"calico-apiserver-56cb94fc6-8w5rf\" (UID: \"f5d38221-ee04-42c6-b763-9c6f6f204114\") " pod="calico-apiserver/calico-apiserver-56cb94fc6-8w5rf" Sep 6 01:22:41.835429 kubelet[2729]: I0906 01:22:41.835302 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c7f8a4f-c635-4c57-9d1e-e96d0df387cd-whisker-ca-bundle\") pod 
\"whisker-d784c57d5-vskf8\" (UID: \"9c7f8a4f-c635-4c57-9d1e-e96d0df387cd\") " pod="calico-system/whisker-d784c57d5-vskf8" Sep 6 01:22:41.835429 kubelet[2729]: I0906 01:22:41.835319 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6be53ee4-d329-4ede-8f50-6b8eeba7191c-goldmane-key-pair\") pod \"goldmane-7988f88666-nn22p\" (UID: \"6be53ee4-d329-4ede-8f50-6b8eeba7191c\") " pod="calico-system/goldmane-7988f88666-nn22p" Sep 6 01:22:41.835429 kubelet[2729]: I0906 01:22:41.835337 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c849x\" (UniqueName: \"kubernetes.io/projected/6be53ee4-d329-4ede-8f50-6b8eeba7191c-kube-api-access-c849x\") pod \"goldmane-7988f88666-nn22p\" (UID: \"6be53ee4-d329-4ede-8f50-6b8eeba7191c\") " pod="calico-system/goldmane-7988f88666-nn22p" Sep 6 01:22:41.835552 kubelet[2729]: I0906 01:22:41.835354 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/75a2fc54-827a-4281-a019-abda9f06779c-calico-apiserver-certs\") pod \"calico-apiserver-56cb94fc6-989l7\" (UID: \"75a2fc54-827a-4281-a019-abda9f06779c\") " pod="calico-apiserver/calico-apiserver-56cb94fc6-989l7" Sep 6 01:22:41.835552 kubelet[2729]: I0906 01:22:41.835373 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a84708b2-35a4-42bb-8dac-86ea0f5ddee1-config-volume\") pod \"coredns-7c65d6cfc9-5pdmz\" (UID: \"a84708b2-35a4-42bb-8dac-86ea0f5ddee1\") " pod="kube-system/coredns-7c65d6cfc9-5pdmz" Sep 6 01:22:41.835552 kubelet[2729]: I0906 01:22:41.835389 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkbgw\" (UniqueName: \"kubernetes.io/projected/a84708b2-35a4-42bb-8dac-86ea0f5ddee1-kube-api-access-zkbgw\") pod \"coredns-7c65d6cfc9-5pdmz\" (UID: \"a84708b2-35a4-42bb-8dac-86ea0f5ddee1\") " pod="kube-system/coredns-7c65d6cfc9-5pdmz" Sep 6 01:22:41.835552 kubelet[2729]: I0906 01:22:41.835407 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be53ee4-d329-4ede-8f50-6b8eeba7191c-config\") pod \"goldmane-7988f88666-nn22p\" (UID: \"6be53ee4-d329-4ede-8f50-6b8eeba7191c\") " pod="calico-system/goldmane-7988f88666-nn22p" Sep 6 01:22:41.835552 kubelet[2729]: I0906 01:22:41.835427 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f5d38221-ee04-42c6-b763-9c6f6f204114-calico-apiserver-certs\") pod \"calico-apiserver-56cb94fc6-8w5rf\" (UID: \"f5d38221-ee04-42c6-b763-9c6f6f204114\") " pod="calico-apiserver/calico-apiserver-56cb94fc6-8w5rf" Sep 6 01:22:41.835676 kubelet[2729]: I0906 01:22:41.835446 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cltk\" (UniqueName: \"kubernetes.io/projected/9c7f8a4f-c635-4c57-9d1e-e96d0df387cd-kube-api-access-8cltk\") pod \"whisker-d784c57d5-vskf8\" (UID: \"9c7f8a4f-c635-4c57-9d1e-e96d0df387cd\") " pod="calico-system/whisker-d784c57d5-vskf8" Sep 6 01:22:41.835676 kubelet[2729]: I0906 01:22:41.835461 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-4khxh\" (UniqueName: \"kubernetes.io/projected/f4c05567-6639-4b0f-94fa-e71542024f21-kube-api-access-4khxh\") pod \"calico-kube-controllers-9fd44b5cd-ggmhh\" (UID: \"f4c05567-6639-4b0f-94fa-e71542024f21\") " pod="calico-system/calico-kube-controllers-9fd44b5cd-ggmhh" Sep 6 01:22:41.835676 kubelet[2729]: I0906 01:22:41.835479 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e1ef8a5-5849-44eb-8c06-2ea19305f74d-config-volume\") pod \"coredns-7c65d6cfc9-d2dkt\" (UID: \"7e1ef8a5-5849-44eb-8c06-2ea19305f74d\") " pod="kube-system/coredns-7c65d6cfc9-d2dkt" Sep 6 01:22:41.835676 kubelet[2729]: I0906 01:22:41.835518 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c7f8a4f-c635-4c57-9d1e-e96d0df387cd-whisker-backend-key-pair\") pod \"whisker-d784c57d5-vskf8\" (UID: \"9c7f8a4f-c635-4c57-9d1e-e96d0df387cd\") " pod="calico-system/whisker-d784c57d5-vskf8" Sep 6 01:22:41.835676 kubelet[2729]: I0906 01:22:41.835536 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6be53ee4-d329-4ede-8f50-6b8eeba7191c-goldmane-ca-bundle\") pod \"goldmane-7988f88666-nn22p\" (UID: \"6be53ee4-d329-4ede-8f50-6b8eeba7191c\") " pod="calico-system/goldmane-7988f88666-nn22p" Sep 6 01:22:42.554712 env[1586]: time="2025-09-06T01:22:42.549568272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r5mms,Uid:d9edb798-4304-4b76-a60a-df0eaa0d87c0,Namespace:calico-system,Attempt:0,}" Sep 6 01:22:42.633594 env[1586]: time="2025-09-06T01:22:42.633263205Z" level=info msg="shim disconnected" id=165026f3e5f1bdb6c6256b8c10d2cdcd74bc1203707add78912705dd7cbea93d Sep 6 01:22:42.633594 env[1586]: time="2025-09-06T01:22:42.633309245Z" level=warning msg="cleaning up after shim disconnected" id=165026f3e5f1bdb6c6256b8c10d2cdcd74bc1203707add78912705dd7cbea93d namespace=k8s.io Sep 6 01:22:42.633594 env[1586]: time="2025-09-06T01:22:42.633317645Z" level=info msg="cleaning up dead shim" Sep 6 01:22:42.639864 env[1586]: time="2025-09-06T01:22:42.639827209Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:22:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3474 runtime=io.containerd.runc.v2\n" Sep 6 01:22:42.708896 env[1586]: time="2025-09-06T01:22:42.708857132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5pdmz,Uid:a84708b2-35a4-42bb-8dac-86ea0f5ddee1,Namespace:kube-system,Attempt:0,}" Sep 6 01:22:42.715979 env[1586]: time="2025-09-06T01:22:42.715935577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d2dkt,Uid:7e1ef8a5-5849-44eb-8c06-2ea19305f74d,Namespace:kube-system,Attempt:0,}" Sep 6 01:22:42.719166 env[1586]: time="2025-09-06T01:22:42.719109779Z" level=error msg="Failed to destroy network for sandbox \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:42.719641 env[1586]: time="2025-09-06T01:22:42.719590139Z" level=error msg="encountered an error cleaning up failed sandbox \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:42.720370 env[1586]: time="2025-09-06T01:22:42.720337900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r5mms,Uid:d9edb798-4304-4b76-a60a-df0eaa0d87c0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:42.720659 kubelet[2729]: E0906 01:22:42.720619 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:42.721089 kubelet[2729]: E0906 01:22:42.720696 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r5mms" Sep 6 01:22:42.721134 kubelet[2729]: E0906 01:22:42.721096 2729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r5mms" Sep 6 01:22:42.721178 kubelet[2729]: E0906 01:22:42.721141 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r5mms_calico-system(d9edb798-4304-4b76-a60a-df0eaa0d87c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r5mms_calico-system(d9edb798-4304-4b76-a60a-df0eaa0d87c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r5mms" podUID="d9edb798-4304-4b76-a60a-df0eaa0d87c0" Sep 6 01:22:42.721481 env[1586]: time="2025-09-06T01:22:42.721430460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56cb94fc6-989l7,Uid:75a2fc54-827a-4281-a019-abda9f06779c,Namespace:calico-apiserver,Attempt:0,}" Sep 6 01:22:42.724975 env[1586]: time="2025-09-06T01:22:42.724944262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fd44b5cd-ggmhh,Uid:f4c05567-6639-4b0f-94fa-e71542024f21,Namespace:calico-system,Attempt:0,}" Sep 6 01:22:42.727327 env[1586]: time="2025-09-06T01:22:42.727301984Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:whisker-d784c57d5-vskf8,Uid:9c7f8a4f-c635-4c57-9d1e-e96d0df387cd,Namespace:calico-system,Attempt:0,}" Sep 6 01:22:42.734276 env[1586]: time="2025-09-06T01:22:42.734218388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nn22p,Uid:6be53ee4-d329-4ede-8f50-6b8eeba7191c,Namespace:calico-system,Attempt:0,}" Sep 6 01:22:42.738151 env[1586]: time="2025-09-06T01:22:42.738123391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56cb94fc6-8w5rf,Uid:f5d38221-ee04-42c6-b763-9c6f6f204114,Namespace:calico-apiserver,Attempt:0,}" Sep 6 01:22:43.110491 env[1586]: time="2025-09-06T01:22:43.110433425Z" level=error msg="Failed to destroy network for sandbox \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.111445 env[1586]: time="2025-09-06T01:22:43.111404665Z" level=error msg="encountered an error cleaning up failed sandbox \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.111538 env[1586]: time="2025-09-06T01:22:43.111458465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5pdmz,Uid:a84708b2-35a4-42bb-8dac-86ea0f5ddee1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.111960 kubelet[2729]: E0906 01:22:43.111709 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.111960 kubelet[2729]: E0906 01:22:43.111770 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5pdmz" Sep 6 01:22:43.111960 kubelet[2729]: E0906 01:22:43.111790 2729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5pdmz" Sep 6 01:22:43.113780 kubelet[2729]: E0906 01:22:43.111826 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7c65d6cfc9-5pdmz_kube-system(a84708b2-35a4-42bb-8dac-86ea0f5ddee1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-5pdmz_kube-system(a84708b2-35a4-42bb-8dac-86ea0f5ddee1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-5pdmz" podUID="a84708b2-35a4-42bb-8dac-86ea0f5ddee1" Sep 6 01:22:43.181125 env[1586]: time="2025-09-06T01:22:43.181066469Z" level=error msg="Failed to destroy network for sandbox \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.181440 env[1586]: time="2025-09-06T01:22:43.181408389Z" level=error msg="encountered an error cleaning up failed sandbox \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.181491 env[1586]: time="2025-09-06T01:22:43.181455669Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d2dkt,Uid:7e1ef8a5-5849-44eb-8c06-2ea19305f74d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.182597 kubelet[2729]: E0906 01:22:43.181648 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.182597 kubelet[2729]: E0906 01:22:43.181704 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-d2dkt" Sep 6 01:22:43.182597 kubelet[2729]: E0906 01:22:43.181722 2729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-d2dkt" Sep 6 01:22:43.182754 kubelet[2729]: E0906 01:22:43.181758 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7c65d6cfc9-d2dkt_kube-system(7e1ef8a5-5849-44eb-8c06-2ea19305f74d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-d2dkt_kube-system(7e1ef8a5-5849-44eb-8c06-2ea19305f74d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-d2dkt" podUID="7e1ef8a5-5849-44eb-8c06-2ea19305f74d" Sep 6 01:22:43.216631 env[1586]: time="2025-09-06T01:22:43.216569611Z" level=error msg="Failed to destroy network for sandbox \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.217062 env[1586]: time="2025-09-06T01:22:43.217024171Z" level=error msg="encountered an error cleaning up failed sandbox \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.217105 env[1586]: time="2025-09-06T01:22:43.217086771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d784c57d5-vskf8,Uid:9c7f8a4f-c635-4c57-9d1e-e96d0df387cd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.218268 kubelet[2729]: E0906 01:22:43.217316 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.218268 kubelet[2729]: E0906 01:22:43.217372 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d784c57d5-vskf8" Sep 6 01:22:43.218268 kubelet[2729]: E0906 01:22:43.217392 2729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d784c57d5-vskf8" Sep 6 01:22:43.218444 kubelet[2729]: E0906 01:22:43.217430 2729 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d784c57d5-vskf8_calico-system(9c7f8a4f-c635-4c57-9d1e-e96d0df387cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d784c57d5-vskf8_calico-system(9c7f8a4f-c635-4c57-9d1e-e96d0df387cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d784c57d5-vskf8" podUID="9c7f8a4f-c635-4c57-9d1e-e96d0df387cd" Sep 6 01:22:43.234651 env[1586]: time="2025-09-06T01:22:43.234574182Z" level=error msg="Failed to destroy network for sandbox \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.235011 env[1586]: time="2025-09-06T01:22:43.234975822Z" level=error msg="encountered an error cleaning up failed sandbox \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.235078 env[1586]: time="2025-09-06T01:22:43.235026102Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56cb94fc6-989l7,Uid:75a2fc54-827a-4281-a019-abda9f06779c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.236067 kubelet[2729]: E0906 01:22:43.235266 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.236067 kubelet[2729]: E0906 01:22:43.235318 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56cb94fc6-989l7" Sep 6 01:22:43.236067 kubelet[2729]: E0906 01:22:43.235335 2729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56cb94fc6-989l7" Sep 6 01:22:43.236208 kubelet[2729]: E0906 
01:22:43.235375 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56cb94fc6-989l7_calico-apiserver(75a2fc54-827a-4281-a019-abda9f06779c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56cb94fc6-989l7_calico-apiserver(75a2fc54-827a-4281-a019-abda9f06779c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56cb94fc6-989l7" podUID="75a2fc54-827a-4281-a019-abda9f06779c" Sep 6 01:22:43.259114 env[1586]: time="2025-09-06T01:22:43.259062717Z" level=error msg="Failed to destroy network for sandbox \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.259564 env[1586]: time="2025-09-06T01:22:43.259532878Z" level=error msg="encountered an error cleaning up failed sandbox \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.259679 env[1586]: time="2025-09-06T01:22:43.259653558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fd44b5cd-ggmhh,Uid:f4c05567-6639-4b0f-94fa-e71542024f21,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.260696 kubelet[2729]: E0906 01:22:43.259908 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.260696 kubelet[2729]: E0906 01:22:43.259953 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9fd44b5cd-ggmhh" Sep 6 01:22:43.260696 kubelet[2729]: E0906 01:22:43.259971 2729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-9fd44b5cd-ggmhh" Sep 6 01:22:43.260829 kubelet[2729]: E0906 01:22:43.260004 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9fd44b5cd-ggmhh_calico-system(f4c05567-6639-4b0f-94fa-e71542024f21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9fd44b5cd-ggmhh_calico-system(f4c05567-6639-4b0f-94fa-e71542024f21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9fd44b5cd-ggmhh" podUID="f4c05567-6639-4b0f-94fa-e71542024f21" Sep 6 01:22:43.265054 env[1586]: time="2025-09-06T01:22:43.265007961Z" level=error msg="Failed to destroy network for sandbox \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.265371 env[1586]: time="2025-09-06T01:22:43.265337081Z" level=error msg="encountered an error cleaning up failed sandbox \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.265411 env[1586]: time="2025-09-06T01:22:43.265392881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56cb94fc6-8w5rf,Uid:f5d38221-ee04-42c6-b763-9c6f6f204114,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.265688 kubelet[2729]: E0906 01:22:43.265541 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.265688 kubelet[2729]: E0906 01:22:43.265583 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56cb94fc6-8w5rf" Sep 6 01:22:43.265688 kubelet[2729]: E0906 01:22:43.265600 2729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56cb94fc6-8w5rf" Sep 6 01:22:43.265801 kubelet[2729]: E0906 01:22:43.265636 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56cb94fc6-8w5rf_calico-apiserver(f5d38221-ee04-42c6-b763-9c6f6f204114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56cb94fc6-8w5rf_calico-apiserver(f5d38221-ee04-42c6-b763-9c6f6f204114)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56cb94fc6-8w5rf" podUID="f5d38221-ee04-42c6-b763-9c6f6f204114" Sep 6 01:22:43.281336 env[1586]: time="2025-09-06T01:22:43.281285371Z" level=error msg="Failed to destroy network for sandbox \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.281641 env[1586]: time="2025-09-06T01:22:43.281607531Z" level=error msg="encountered an error cleaning up failed sandbox \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.281695 env[1586]: time="2025-09-06T01:22:43.281655531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nn22p,Uid:6be53ee4-d329-4ede-8f50-6b8eeba7191c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.282595 kubelet[2729]: E0906 01:22:43.281840 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.282595 kubelet[2729]: E0906 01:22:43.281884 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-nn22p" Sep 6 01:22:43.282595 kubelet[2729]: E0906 01:22:43.281899 2729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-nn22p" Sep 6 01:22:43.282721 kubelet[2729]: E0906 01:22:43.281927 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-nn22p_calico-system(6be53ee4-d329-4ede-8f50-6b8eeba7191c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-nn22p_calico-system(6be53ee4-d329-4ede-8f50-6b8eeba7191c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-nn22p" podUID="6be53ee4-d329-4ede-8f50-6b8eeba7191c" Sep 6 01:22:43.439287 kubelet[2729]: I0906 01:22:43.438287 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Sep 6 01:22:43.439425 env[1586]: time="2025-09-06T01:22:43.439168749Z" level=info msg="StopPodSandbox for \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\"" Sep 6 01:22:43.441139 kubelet[2729]: I0906 01:22:43.441110 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Sep 6 01:22:43.441738 env[1586]: time="2025-09-06T01:22:43.441703031Z" level=info msg="StopPodSandbox for \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\"" Sep 6 01:22:43.442635 kubelet[2729]: I0906 01:22:43.442554 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Sep 6 01:22:43.443143 env[1586]: time="2025-09-06T01:22:43.443105272Z" level=info msg="StopPodSandbox for \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\"" Sep 6 01:22:43.444550 kubelet[2729]: I0906 01:22:43.444252 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Sep 6 01:22:43.444693 env[1586]: time="2025-09-06T01:22:43.444656473Z" level=info msg="StopPodSandbox for \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\"" Sep 6 01:22:43.446891 kubelet[2729]: I0906 01:22:43.446585 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Sep 6 01:22:43.447002 env[1586]: time="2025-09-06T01:22:43.446968714Z" level=info msg="StopPodSandbox for \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\"" Sep 6 01:22:43.455008 kubelet[2729]: I0906 01:22:43.454637 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Sep 6 01:22:43.455587 env[1586]: time="2025-09-06T01:22:43.455547680Z" level=info msg="StopPodSandbox for \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\"" Sep 6 01:22:43.459063 kubelet[2729]: I0906 01:22:43.458728 2729 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Sep 6 01:22:43.459314 env[1586]: time="2025-09-06T01:22:43.459277602Z" level=info msg="StopPodSandbox for \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\"" Sep 6 01:22:43.480190 env[1586]: time="2025-09-06T01:22:43.480148215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 6 01:22:43.482147 kubelet[2729]: I0906 01:22:43.482108 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Sep 6 01:22:43.483607 env[1586]: time="2025-09-06T01:22:43.483555857Z" level=info msg="StopPodSandbox for \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\"" Sep 6 01:22:43.532502 env[1586]: time="2025-09-06T01:22:43.531733807Z" level=error msg="StopPodSandbox for \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\" failed" error="failed to destroy network for sandbox \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.532758 kubelet[2729]: E0906 01:22:43.531979 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Sep 6 01:22:43.532758 kubelet[2729]: E0906 01:22:43.532047 2729 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1"} Sep 6 01:22:43.532758 kubelet[2729]: E0906 01:22:43.532127 2729 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e1ef8a5-5849-44eb-8c06-2ea19305f74d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:22:43.532758 kubelet[2729]: E0906 01:22:43.532150 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7e1ef8a5-5849-44eb-8c06-2ea19305f74d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-d2dkt" podUID="7e1ef8a5-5849-44eb-8c06-2ea19305f74d" Sep 6 01:22:43.533973 env[1586]: time="2025-09-06T01:22:43.533924648Z" level=error msg="StopPodSandbox for \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\" failed" error="failed to destroy network for sandbox \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.534129 kubelet[2729]: E0906 01:22:43.534087 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Sep 6 01:22:43.534182 kubelet[2729]: E0906 01:22:43.534131 2729 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8"} Sep 6 01:22:43.534182 kubelet[2729]: E0906 01:22:43.534155 2729 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c7f8a4f-c635-4c57-9d1e-e96d0df387cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:22:43.534280 kubelet[2729]: E0906 01:22:43.534185 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c7f8a4f-c635-4c57-9d1e-e96d0df387cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d784c57d5-vskf8" podUID="9c7f8a4f-c635-4c57-9d1e-e96d0df387cd" Sep 6 01:22:43.552125 env[1586]: time="2025-09-06T01:22:43.552061060Z" level=error msg="StopPodSandbox for \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\" failed" error="failed to destroy network for sandbox \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.552509 kubelet[2729]: E0906 01:22:43.552384 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Sep 6 01:22:43.552509 kubelet[2729]: E0906 01:22:43.552427 2729 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f"} Sep 6 01:22:43.552509 kubelet[2729]: E0906 01:22:43.552456 2729 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"a84708b2-35a4-42bb-8dac-86ea0f5ddee1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:22:43.552509 kubelet[2729]: E0906 01:22:43.552484 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a84708b2-35a4-42bb-8dac-86ea0f5ddee1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-5pdmz" podUID="a84708b2-35a4-42bb-8dac-86ea0f5ddee1" Sep 6 01:22:43.553037 env[1586]: time="2025-09-06T01:22:43.553003820Z" level=error msg="StopPodSandbox for \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\" failed" error="failed to destroy network for sandbox \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.553383 kubelet[2729]: E0906 01:22:43.553289 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Sep 6 01:22:43.553383 kubelet[2729]: E0906 01:22:43.553321 2729 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953"} Sep 6 01:22:43.553383 kubelet[2729]: E0906 01:22:43.553344 2729 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9edb798-4304-4b76-a60a-df0eaa0d87c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:22:43.553383 kubelet[2729]: E0906 01:22:43.553360 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9edb798-4304-4b76-a60a-df0eaa0d87c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r5mms" podUID="d9edb798-4304-4b76-a60a-df0eaa0d87c0" Sep 6 01:22:43.558570 env[1586]: time="2025-09-06T01:22:43.558514704Z" 
level=error msg="StopPodSandbox for \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\" failed" error="failed to destroy network for sandbox \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.558738 kubelet[2729]: E0906 01:22:43.558683 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Sep 6 01:22:43.558799 kubelet[2729]: E0906 01:22:43.558743 2729 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270"} Sep 6 01:22:43.558799 kubelet[2729]: E0906 01:22:43.558770 2729 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f5d38221-ee04-42c6-b763-9c6f6f204114\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:22:43.558872 kubelet[2729]: E0906 01:22:43.558803 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f5d38221-ee04-42c6-b763-9c6f6f204114\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56cb94fc6-8w5rf" podUID="f5d38221-ee04-42c6-b763-9c6f6f204114" Sep 6 01:22:43.585878 env[1586]: time="2025-09-06T01:22:43.585816961Z" level=error msg="StopPodSandbox for \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\" failed" error="failed to destroy network for sandbox \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.586280 kubelet[2729]: E0906 01:22:43.586134 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Sep 6 01:22:43.586280 kubelet[2729]: E0906 01:22:43.586181 2729 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64"} Sep 6 01:22:43.586280 kubelet[2729]: E0906 01:22:43.586212 2729 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"75a2fc54-827a-4281-a019-abda9f06779c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:22:43.586280 kubelet[2729]: E0906 01:22:43.586232 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"75a2fc54-827a-4281-a019-abda9f06779c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56cb94fc6-989l7" podUID="75a2fc54-827a-4281-a019-abda9f06779c" Sep 6 01:22:43.587398 env[1586]: time="2025-09-06T01:22:43.587351202Z" level=error msg="StopPodSandbox for \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\" failed" error="failed to destroy network for sandbox \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.587589 kubelet[2729]: E0906 01:22:43.587555 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Sep 6 01:22:43.587643 kubelet[2729]: E0906 01:22:43.587598 2729 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227"} Sep 6 01:22:43.587673 kubelet[2729]: E0906 01:22:43.587637 2729 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6be53ee4-d329-4ede-8f50-6b8eeba7191c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:22:43.587673 kubelet[2729]: E0906 01:22:43.587662 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6be53ee4-d329-4ede-8f50-6b8eeba7191c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-nn22p" podUID="6be53ee4-d329-4ede-8f50-6b8eeba7191c" Sep 6 01:22:43.593197 env[1586]: time="2025-09-06T01:22:43.593148285Z" level=error msg="StopPodSandbox for \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\" failed" error="failed to destroy network for sandbox \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:22:43.593523 kubelet[2729]: E0906 01:22:43.593410 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Sep 6 01:22:43.593523 kubelet[2729]: E0906 01:22:43.593452 2729 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd"} Sep 6 01:22:43.593523 kubelet[2729]: E0906 01:22:43.593481 2729 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f4c05567-6639-4b0f-94fa-e71542024f21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:22:43.593523 kubelet[2729]: E0906 01:22:43.593499 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f4c05567-6639-4b0f-94fa-e71542024f21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9fd44b5cd-ggmhh" podUID="f4c05567-6639-4b0f-94fa-e71542024f21" Sep 6 01:22:48.164865 kubelet[2729]: I0906 01:22:48.164672 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 01:22:48.239626 kernel: kauditd_printk_skb: 8 callbacks suppressed Sep 6 01:22:48.239758 kernel: audit: type=1325 audit(1757121768.228:314): table=filter:102 family=2 entries=21 op=nft_register_rule pid=3826 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:48.228000 audit[3826]: NETFILTER_CFG table=filter:102 family=2 entries=21 op=nft_register_rule pid=3826 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:48.228000 audit[3826]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdc9cf750 a2=0 a3=1 items=0 ppid=2834 pid=3826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:48.292356 kernel: audit: type=1300 audit(1757121768.228:314): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdc9cf750 a2=0 a3=1 items=0 ppid=2834 pid=3826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:48.228000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:48.310203 kernel: audit: type=1327 audit(1757121768.228:314): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:48.292000 audit[3826]: NETFILTER_CFG table=nat:103 family=2 entries=19 op=nft_register_chain pid=3826 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:48.327632 kernel: audit: type=1325 audit(1757121768.292:315): table=nat:103 family=2 entries=19 op=nft_register_chain pid=3826 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:48.292000 audit[3826]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffdc9cf750 a2=0 a3=1 items=0 ppid=2834 pid=3826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:48.362752 kernel: audit: type=1300 audit(1757121768.292:315): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffdc9cf750 a2=0 a3=1 items=0 ppid=2834 pid=3826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:48.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:48.381911 kernel: audit: type=1327 audit(1757121768.292:315): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:48.631217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3420674450.mount: Deactivated successfully. 
Sep 6 01:22:48.991542 env[1586]: time="2025-09-06T01:22:48.989836844Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:48.999496 env[1586]: time="2025-09-06T01:22:48.999463049Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:49.005449 env[1586]: time="2025-09-06T01:22:49.005424053Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:49.013639 env[1586]: time="2025-09-06T01:22:49.013614457Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:49.014651 env[1586]: time="2025-09-06T01:22:49.014231818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 6 01:22:49.029519 env[1586]: time="2025-09-06T01:22:49.029485107Z" level=info msg="CreateContainer within sandbox \"ea6b2c916ce86a429412c8e207cad746b650d3fd0f9ea5feca075d785602cc7f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 6 01:22:49.075346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount275834657.mount: Deactivated successfully. Sep 6 01:22:49.094701 env[1586]: time="2025-09-06T01:22:49.094655224Z" level=info msg="CreateContainer within sandbox \"ea6b2c916ce86a429412c8e207cad746b650d3fd0f9ea5feca075d785602cc7f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"31c28442c4258c80440ede4c427d6dcda161e5b05d2e882f5c1546afc6c909cb\"" Sep 6 01:22:49.095377 env[1586]: time="2025-09-06T01:22:49.095348424Z" level=info msg="StartContainer for \"31c28442c4258c80440ede4c427d6dcda161e5b05d2e882f5c1546afc6c909cb\"" Sep 6 01:22:49.151284 env[1586]: time="2025-09-06T01:22:49.151227857Z" level=info msg="StartContainer for \"31c28442c4258c80440ede4c427d6dcda161e5b05d2e882f5c1546afc6c909cb\" returns successfully" Sep 6 01:22:49.340664 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 6 01:22:49.340790 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
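With calico-node now started, the condition behind the 01:22:43 teardown failures above goes away: the CNI plugin refuses to delete a sandbox's network until /var/lib/calico/nodename exists, and that file is only written once the calico/node container is running. A minimal sketch of that precondition check, assuming nothing beyond the path quoted in the error messages (this is not Calico's actual plugin code):

    // nodename_check.go: the teardown errors above fail on a stat of
    // /var/lib/calico/nodename. This reproduces that existence check with
    // os.Stat; the real CNI plugin does considerably more.
    package main

    import (
        "fmt"
        "os"
    )

    // Path quoted verbatim in the "failed to destroy network" errors above.
    const nodenameFile = "/var/lib/calico/nodename"

    func nodenameReady() (string, error) {
        if _, err := os.Stat(nodenameFile); err != nil {
            // Mirrors the logged failure: "stat /var/lib/calico/nodename:
            // no such file or directory" until calico/node has started.
            return "", err
        }
        b, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", err
        }
        return string(b), nil
    }

    func main() {
        name, err := nodenameReady()
        if err != nil {
            fmt.Println("calico/node not ready:", err)
            return
        }
        fmt.Println("node name:", name)
    }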
Sep 6 01:22:49.476038 env[1586]: time="2025-09-06T01:22:49.476000244Z" level=info msg="StopPodSandbox for \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\"" Sep 6 01:22:49.525874 kubelet[2729]: I0906 01:22:49.525572 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2xsxz" podStartSLOduration=1.00872253 podStartE2EDuration="17.525553152s" podCreationTimestamp="2025-09-06 01:22:32 +0000 UTC" firstStartedPulling="2025-09-06 01:22:32.498477356 +0000 UTC m=+22.386581566" lastFinishedPulling="2025-09-06 01:22:49.015307978 +0000 UTC m=+38.903412188" observedRunningTime="2025-09-06 01:22:49.525546032 +0000 UTC m=+39.413650242" watchObservedRunningTime="2025-09-06 01:22:49.525553152 +0000 UTC m=+39.413657362" Sep 6 01:22:49.629391 env[1586]: 2025-09-06 01:22:49.582 [INFO][3891] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Sep 6 01:22:49.629391 env[1586]: 2025-09-06 01:22:49.582 [INFO][3891] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" iface="eth0" netns="/var/run/netns/cni-a35e4ab5-8dd2-af4d-1e41-6150a4d52a2d" Sep 6 01:22:49.629391 env[1586]: 2025-09-06 01:22:49.582 [INFO][3891] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" iface="eth0" netns="/var/run/netns/cni-a35e4ab5-8dd2-af4d-1e41-6150a4d52a2d" Sep 6 01:22:49.629391 env[1586]: 2025-09-06 01:22:49.582 [INFO][3891] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" iface="eth0" netns="/var/run/netns/cni-a35e4ab5-8dd2-af4d-1e41-6150a4d52a2d" Sep 6 01:22:49.629391 env[1586]: 2025-09-06 01:22:49.582 [INFO][3891] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Sep 6 01:22:49.629391 env[1586]: 2025-09-06 01:22:49.582 [INFO][3891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Sep 6 01:22:49.629391 env[1586]: 2025-09-06 01:22:49.614 [INFO][3899] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" HandleID="k8s-pod-network.645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Workload="ci--3510.3.8--n--34c19deec5-k8s-whisker--d784c57d5--vskf8-eth0" Sep 6 01:22:49.629391 env[1586]: 2025-09-06 01:22:49.614 [INFO][3899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:22:49.629391 env[1586]: 2025-09-06 01:22:49.614 [INFO][3899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:22:49.629391 env[1586]: 2025-09-06 01:22:49.624 [WARNING][3899] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" HandleID="k8s-pod-network.645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Workload="ci--3510.3.8--n--34c19deec5-k8s-whisker--d784c57d5--vskf8-eth0" Sep 6 01:22:49.629391 env[1586]: 2025-09-06 01:22:49.624 [INFO][3899] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" HandleID="k8s-pod-network.645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Workload="ci--3510.3.8--n--34c19deec5-k8s-whisker--d784c57d5--vskf8-eth0" Sep 6 01:22:49.629391 env[1586]: 2025-09-06 01:22:49.625 [INFO][3899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:22:49.629391 env[1586]: 2025-09-06 01:22:49.627 [INFO][3891] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Sep 6 01:22:49.631131 env[1586]: time="2025-09-06T01:22:49.631086293Z" level=info msg="TearDown network for sandbox \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\" successfully" Sep 6 01:22:49.631322 env[1586]: time="2025-09-06T01:22:49.631228053Z" level=info msg="StopPodSandbox for \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\" returns successfully" Sep 6 01:22:49.634017 systemd[1]: run-netns-cni\x2da35e4ab5\x2d8dd2\x2daf4d\x2d1e41\x2d6150a4d52a2d.mount: Deactivated successfully. Sep 6 01:22:49.685049 kubelet[2729]: I0906 01:22:49.684318 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c7f8a4f-c635-4c57-9d1e-e96d0df387cd-whisker-backend-key-pair\") pod \"9c7f8a4f-c635-4c57-9d1e-e96d0df387cd\" (UID: \"9c7f8a4f-c635-4c57-9d1e-e96d0df387cd\") " Sep 6 01:22:49.685049 kubelet[2729]: I0906 01:22:49.684377 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cltk\" (UniqueName: \"kubernetes.io/projected/9c7f8a4f-c635-4c57-9d1e-e96d0df387cd-kube-api-access-8cltk\") pod \"9c7f8a4f-c635-4c57-9d1e-e96d0df387cd\" (UID: \"9c7f8a4f-c635-4c57-9d1e-e96d0df387cd\") " Sep 6 01:22:49.685049 kubelet[2729]: I0906 01:22:49.684400 2729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c7f8a4f-c635-4c57-9d1e-e96d0df387cd-whisker-ca-bundle\") pod \"9c7f8a4f-c635-4c57-9d1e-e96d0df387cd\" (UID: \"9c7f8a4f-c635-4c57-9d1e-e96d0df387cd\") " Sep 6 01:22:49.685049 kubelet[2729]: I0906 01:22:49.684807 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c7f8a4f-c635-4c57-9d1e-e96d0df387cd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9c7f8a4f-c635-4c57-9d1e-e96d0df387cd" (UID: "9c7f8a4f-c635-4c57-9d1e-e96d0df387cd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 01:22:49.689501 systemd[1]: var-lib-kubelet-pods-9c7f8a4f\x2dc635\x2d4c57\x2d9d1e\x2de96d0df387cd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
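The mount units systemd reports as deactivated above use systemd's unit-name escaping: '/' in the path becomes '-', and bytes such as '-' and '~' are written as \x2d and \x7e. The Go sketch below reverses that convention for readability; it illustrates the escaping rule only and is not systemd's own implementation.

    // unit_unescape.go: turn an escaped systemd mount unit name back into the
    // path it encodes ('-' stands for '/', "\xNN" for a literal byte).
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func unescapeUnitPath(name string) string {
        name = strings.TrimSuffix(name, ".mount")
        var b strings.Builder
        for i := 0; i < len(name); i++ {
            switch {
            case name[i] == '-':
                b.WriteByte('/') // '-' encodes a path separator
            case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
                v, err := strconv.ParseUint(name[i+2:i+4], 16, 8)
                if err == nil {
                    b.WriteByte(byte(v)) // "\xNN" encodes a literal byte
                    i += 3
                    continue
                }
                fallthrough
            default:
                b.WriteByte(name[i])
            }
        }
        return "/" + strings.TrimPrefix(b.String(), "/")
    }

    func main() {
        // Unit name taken from the log entries above; prints the kubelet
        // secret volume path for pod 9c7f8a4f-c635-4c57-9d1e-e96d0df387cd.
        fmt.Println(unescapeUnitPath(`var-lib-kubelet-pods-9c7f8a4f\x2dc635\x2d4c57\x2d9d1e\x2de96d0df387cd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount`))
    }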
Sep 6 01:22:49.692107 kubelet[2729]: I0906 01:22:49.692077 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7f8a4f-c635-4c57-9d1e-e96d0df387cd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9c7f8a4f-c635-4c57-9d1e-e96d0df387cd" (UID: "9c7f8a4f-c635-4c57-9d1e-e96d0df387cd"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 01:22:49.695586 systemd[1]: var-lib-kubelet-pods-9c7f8a4f\x2dc635\x2d4c57\x2d9d1e\x2de96d0df387cd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8cltk.mount: Deactivated successfully. Sep 6 01:22:49.697552 kubelet[2729]: I0906 01:22:49.697379 2729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c7f8a4f-c635-4c57-9d1e-e96d0df387cd-kube-api-access-8cltk" (OuterVolumeSpecName: "kube-api-access-8cltk") pod "9c7f8a4f-c635-4c57-9d1e-e96d0df387cd" (UID: "9c7f8a4f-c635-4c57-9d1e-e96d0df387cd"). InnerVolumeSpecName "kube-api-access-8cltk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 01:22:49.785694 kubelet[2729]: I0906 01:22:49.785566 2729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cltk\" (UniqueName: \"kubernetes.io/projected/9c7f8a4f-c635-4c57-9d1e-e96d0df387cd-kube-api-access-8cltk\") on node \"ci-3510.3.8-n-34c19deec5\" DevicePath \"\"" Sep 6 01:22:49.785694 kubelet[2729]: I0906 01:22:49.785604 2729 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c7f8a4f-c635-4c57-9d1e-e96d0df387cd-whisker-ca-bundle\") on node \"ci-3510.3.8-n-34c19deec5\" DevicePath \"\"" Sep 6 01:22:49.785694 kubelet[2729]: I0906 01:22:49.785616 2729 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c7f8a4f-c635-4c57-9d1e-e96d0df387cd-whisker-backend-key-pair\") on node \"ci-3510.3.8-n-34c19deec5\" DevicePath \"\"" Sep 6 01:22:50.694054 kubelet[2729]: I0906 01:22:50.694017 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c256def3-ba5a-403f-bcc5-ad56c6214ce0-whisker-backend-key-pair\") pod \"whisker-6b9bf7db4-hpdlz\" (UID: \"c256def3-ba5a-403f-bcc5-ad56c6214ce0\") " pod="calico-system/whisker-6b9bf7db4-hpdlz" Sep 6 01:22:50.694582 kubelet[2729]: I0906 01:22:50.694564 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz8wg\" (UniqueName: \"kubernetes.io/projected/c256def3-ba5a-403f-bcc5-ad56c6214ce0-kube-api-access-dz8wg\") pod \"whisker-6b9bf7db4-hpdlz\" (UID: \"c256def3-ba5a-403f-bcc5-ad56c6214ce0\") " pod="calico-system/whisker-6b9bf7db4-hpdlz" Sep 6 01:22:50.694698 kubelet[2729]: I0906 01:22:50.694684 2729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c256def3-ba5a-403f-bcc5-ad56c6214ce0-whisker-ca-bundle\") pod \"whisker-6b9bf7db4-hpdlz\" (UID: \"c256def3-ba5a-403f-bcc5-ad56c6214ce0\") " pod="calico-system/whisker-6b9bf7db4-hpdlz" Sep 6 01:22:50.842000 audit[3952]: AVC avc: denied { write } for pid=3952 comm="tee" name="fd" dev="proc" ino=24473 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:22:50.842000 audit[3952]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 
a0=ffffffffffffff9c a1=ffffd12bd7c7 a2=241 a3=1b6 items=1 ppid=3929 pid=3952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:50.882347 env[1586]: time="2025-09-06T01:22:50.881902807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b9bf7db4-hpdlz,Uid:c256def3-ba5a-403f-bcc5-ad56c6214ce0,Namespace:calico-system,Attempt:0,}" Sep 6 01:22:50.894955 kernel: audit: type=1400 audit(1757121770.842:316): avc: denied { write } for pid=3952 comm="tee" name="fd" dev="proc" ino=24473 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:22:50.895091 kernel: audit: type=1300 audit(1757121770.842:316): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd12bd7c7 a2=241 a3=1b6 items=1 ppid=3929 pid=3952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:50.842000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 6 01:22:50.933693 kernel: audit: type=1307 audit(1757121770.842:316): cwd="/etc/service/enabled/confd/log" Sep 6 01:22:50.842000 audit: PATH item=0 name="/dev/fd/63" inode=24943 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:50.956835 kernel: audit: type=1302 audit(1757121770.842:316): item=0 name="/dev/fd/63" inode=24943 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:50.842000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 01:22:50.890000 audit[3972]: AVC avc: denied { write } for pid=3972 comm="tee" name="fd" dev="proc" ino=24507 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:22:50.890000 audit[3972]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdf8817b7 a2=241 a3=1b6 items=1 ppid=3938 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:50.890000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 6 01:22:50.890000 audit: PATH item=0 name="/dev/fd/63" inode=24488 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:50.890000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 01:22:50.892000 audit[3975]: AVC avc: denied { write } for pid=3975 comm="tee" name="fd" dev="proc" ino=24511 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:22:50.892000 audit[3975]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd927e7b8 a2=241 a3=1b6 items=1 ppid=3944 pid=3975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:50.892000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 6 01:22:50.892000 audit: PATH item=0 name="/dev/fd/63" inode=24500 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:50.892000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 01:22:50.913000 audit[3987]: AVC avc: denied { write } for pid=3987 comm="tee" name="fd" dev="proc" ino=24527 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:22:50.913000 audit[3987]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff0c147c7 a2=241 a3=1b6 items=1 ppid=3928 pid=3987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:50.913000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 6 01:22:50.913000 audit: PATH item=0 name="/dev/fd/63" inode=24519 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:50.913000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 01:22:50.920000 audit[3990]: AVC avc: denied { write } for pid=3990 comm="tee" name="fd" dev="proc" ino=24535 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:22:50.920000 audit[3990]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc08df7c8 a2=241 a3=1b6 items=1 ppid=3933 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:50.920000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 6 01:22:50.920000 audit: PATH item=0 name="/dev/fd/63" inode=24520 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:50.920000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 01:22:50.952000 audit[4002]: AVC avc: denied { write } for pid=4002 comm="tee" name="fd" dev="proc" ino=24961 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:22:50.952000 audit[4002]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff6d107c7 a2=241 a3=1b6 items=1 ppid=3941 pid=4002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:50.952000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 6 01:22:50.952000 audit: PATH item=0 name="/dev/fd/63" inode=24537 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:50.952000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 01:22:50.984000 audit[4000]: AVC avc: denied { write } for pid=4000 comm="tee" name="fd" dev="proc" ino=24965 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:22:50.984000 audit[4000]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe43727c9 a2=241 a3=1b6 items=1 ppid=3936 pid=4000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:50.984000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 6 01:22:50.984000 audit: PATH item=0 name="/dev/fd/63" inode=24958 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:50.984000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 01:22:51.178645 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calidd6e5eee6c2: link becomes ready Sep 6 01:22:51.183098 systemd-networkd[1753]: calidd6e5eee6c2: Link UP Sep 6 01:22:51.183304 systemd-networkd[1753]: calidd6e5eee6c2: Gained carrier Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.023 [INFO][4005] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.051 [INFO][4005] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34c19deec5-k8s-whisker--6b9bf7db4--hpdlz-eth0 whisker-6b9bf7db4- calico-system c256def3-ba5a-403f-bcc5-ad56c6214ce0 915 0 2025-09-06 01:22:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b9bf7db4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.8-n-34c19deec5 whisker-6b9bf7db4-hpdlz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidd6e5eee6c2 [] [] }} ContainerID="4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" Namespace="calico-system" Pod="whisker-6b9bf7db4-hpdlz" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-whisker--6b9bf7db4--hpdlz-" Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.051 [INFO][4005] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" Namespace="calico-system" Pod="whisker-6b9bf7db4-hpdlz" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-whisker--6b9bf7db4--hpdlz-eth0" Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.087 [INFO][4019] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" HandleID="k8s-pod-network.4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" Workload="ci--3510.3.8--n--34c19deec5-k8s-whisker--6b9bf7db4--hpdlz-eth0" Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.087 [INFO][4019] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" HandleID="k8s-pod-network.4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" 
Workload="ci--3510.3.8--n--34c19deec5-k8s-whisker--6b9bf7db4--hpdlz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-34c19deec5", "pod":"whisker-6b9bf7db4-hpdlz", "timestamp":"2025-09-06 01:22:51.087598603 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34c19deec5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.087 [INFO][4019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.087 [INFO][4019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.088 [INFO][4019] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34c19deec5' Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.098 [INFO][4019] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.107 [INFO][4019] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.120 [INFO][4019] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.122 [INFO][4019] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.124 [INFO][4019] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.124 [INFO][4019] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.126 [INFO][4019] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093 Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.134 [INFO][4019] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.139 [INFO][4019] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.65/26] block=192.168.61.64/26 handle="k8s-pod-network.4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.139 [INFO][4019] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.65/26] handle="k8s-pod-network.4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.139 [INFO][4019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 01:22:51.206101 env[1586]: 2025-09-06 01:22:51.139 [INFO][4019] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.65/26] IPv6=[] ContainerID="4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" HandleID="k8s-pod-network.4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" Workload="ci--3510.3.8--n--34c19deec5-k8s-whisker--6b9bf7db4--hpdlz-eth0" Sep 6 01:22:51.206698 env[1586]: 2025-09-06 01:22:51.141 [INFO][4005] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" Namespace="calico-system" Pod="whisker-6b9bf7db4-hpdlz" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-whisker--6b9bf7db4--hpdlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-whisker--6b9bf7db4--hpdlz-eth0", GenerateName:"whisker-6b9bf7db4-", Namespace:"calico-system", SelfLink:"", UID:"c256def3-ba5a-403f-bcc5-ad56c6214ce0", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b9bf7db4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"", Pod:"whisker-6b9bf7db4-hpdlz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.61.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidd6e5eee6c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:22:51.206698 env[1586]: 2025-09-06 01:22:51.141 [INFO][4005] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.65/32] ContainerID="4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" Namespace="calico-system" Pod="whisker-6b9bf7db4-hpdlz" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-whisker--6b9bf7db4--hpdlz-eth0" Sep 6 01:22:51.206698 env[1586]: 2025-09-06 01:22:51.141 [INFO][4005] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd6e5eee6c2 ContainerID="4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" Namespace="calico-system" Pod="whisker-6b9bf7db4-hpdlz" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-whisker--6b9bf7db4--hpdlz-eth0" Sep 6 01:22:51.206698 env[1586]: 2025-09-06 01:22:51.166 [INFO][4005] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" Namespace="calico-system" Pod="whisker-6b9bf7db4-hpdlz" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-whisker--6b9bf7db4--hpdlz-eth0" Sep 6 01:22:51.206698 env[1586]: 2025-09-06 01:22:51.166 [INFO][4005] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" Namespace="calico-system" Pod="whisker-6b9bf7db4-hpdlz" 
WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-whisker--6b9bf7db4--hpdlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-whisker--6b9bf7db4--hpdlz-eth0", GenerateName:"whisker-6b9bf7db4-", Namespace:"calico-system", SelfLink:"", UID:"c256def3-ba5a-403f-bcc5-ad56c6214ce0", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b9bf7db4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093", Pod:"whisker-6b9bf7db4-hpdlz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.61.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidd6e5eee6c2", MAC:"9e:7f:6b:e3:58:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:22:51.206698 env[1586]: 2025-09-06 01:22:51.201 [INFO][4005] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093" Namespace="calico-system" Pod="whisker-6b9bf7db4-hpdlz" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-whisker--6b9bf7db4--hpdlz-eth0" Sep 6 01:22:51.238571 env[1586]: time="2025-09-06T01:22:51.238432288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:22:51.238694 env[1586]: time="2025-09-06T01:22:51.238481648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:22:51.238694 env[1586]: time="2025-09-06T01:22:51.238504608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:22:51.238694 env[1586]: time="2025-09-06T01:22:51.238643648Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093 pid=4049 runtime=io.containerd.runc.v2 Sep 6 01:22:51.315380 env[1586]: time="2025-09-06T01:22:51.315327931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b9bf7db4-hpdlz,Uid:c256def3-ba5a-403f-bcc5-ad56c6214ce0,Namespace:calico-system,Attempt:0,} returns sandbox id \"4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093\"" Sep 6 01:22:51.317104 env[1586]: time="2025-09-06T01:22:51.317074852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 6 01:22:51.324000 audit[4103]: AVC avc: denied { bpf } for pid=4103 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.324000 audit[4103]: AVC avc: denied { bpf } for pid=4103 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.324000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.324000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.324000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.324000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.324000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.324000 audit[4103]: AVC avc: denied { bpf } for pid=4103 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.324000 audit[4103]: AVC avc: denied { bpf } for pid=4103 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.324000 audit: BPF prog-id=10 op=LOAD Sep 6 01:22:51.324000 audit[4103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeff8cf78 a2=98 a3=ffffeff8cf68 items=0 ppid=3942 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.324000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 01:22:51.325000 audit: BPF prog-id=10 op=UNLOAD Sep 6 01:22:51.325000 audit[4103]: AVC avc: denied { bpf } for pid=4103 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.325000 audit[4103]: AVC avc: denied { bpf } for pid=4103 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.325000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.325000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.325000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.325000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.325000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.325000 audit[4103]: AVC avc: denied { bpf } for pid=4103 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.325000 audit[4103]: AVC avc: denied { bpf } for pid=4103 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.325000 audit: BPF prog-id=11 op=LOAD Sep 6 01:22:51.325000 audit[4103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeff8ce28 a2=74 a3=95 items=0 ppid=3942 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.325000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 01:22:51.326000 audit: BPF prog-id=11 op=UNLOAD Sep 6 01:22:51.326000 audit[4103]: AVC avc: denied { bpf } for pid=4103 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.326000 audit[4103]: AVC avc: denied { bpf } for pid=4103 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.326000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.326000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.326000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.326000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.326000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.326000 audit[4103]: AVC avc: denied { bpf } for pid=4103 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.326000 audit[4103]: AVC avc: denied { bpf } for pid=4103 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.326000 audit: BPF prog-id=12 op=LOAD Sep 6 01:22:51.326000 audit[4103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeff8ce58 a2=40 a3=ffffeff8ce88 items=0 ppid=3942 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.326000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 01:22:51.327000 audit: BPF prog-id=12 op=UNLOAD Sep 6 01:22:51.327000 audit[4103]: AVC avc: denied { perfmon } for pid=4103 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.327000 audit[4103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffeff8cf70 a2=50 a3=0 items=0 ppid=3942 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.327000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 01:22:51.328000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.328000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.328000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.328000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.328000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.328000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.328000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.328000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.328000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.328000 audit: BPF prog-id=13 op=LOAD Sep 6 01:22:51.328000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc9b65d28 a2=98 a3=ffffc9b65d18 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.328000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.329000 audit: BPF prog-id=13 op=UNLOAD Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit: BPF prog-id=14 op=LOAD Sep 6 01:22:51.329000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc9b659b8 a2=74 
a3=95 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.329000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.329000 audit: BPF prog-id=14 op=UNLOAD Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.329000 audit: BPF prog-id=15 op=LOAD Sep 6 01:22:51.329000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc9b65a18 a2=94 a3=2 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.329000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.329000 audit: BPF prog-id=15 op=UNLOAD Sep 6 01:22:51.428000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.428000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.428000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.428000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.428000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.428000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.428000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.428000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.428000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.428000 audit: BPF prog-id=16 op=LOAD Sep 6 01:22:51.428000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc9b659d8 a2=40 a3=ffffc9b65a08 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.428000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.428000 audit: BPF prog-id=16 op=UNLOAD Sep 6 01:22:51.428000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.428000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffc9b65af0 a2=50 a3=0 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.428000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9b65a48 a2=28 a3=ffffc9b65b78 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9b65a78 a2=28 a3=ffffc9b65ba8 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9b65928 a2=28 a3=ffffc9b65a58 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9b65a98 a2=28 a3=ffffc9b65bc8 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9b65a78 a2=28 a3=ffffc9b65ba8 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9b65a68 a2=28 a3=ffffc9b65b98 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9b65a98 a2=28 a3=ffffc9b65bc8 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9b65a78 a2=28 a3=ffffc9b65ba8 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9b65a98 a2=28 a3=ffffc9b65bc8 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9b65a68 a2=28 a3=ffffc9b65b98 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9b65ae8 a2=28 a3=ffffc9b65c28 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc9b65820 a2=50 a3=0 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: 
AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit: BPF prog-id=17 op=LOAD Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc9b65828 a2=94 a3=5 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit: BPF prog-id=17 op=UNLOAD Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc9b65930 a2=50 a3=0 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffc9b65a78 a2=4 a3=3 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.437000 audit[4104]: AVC avc: denied { confidentiality } for pid=4104 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 01:22:51.437000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc9b65a58 a2=94 a3=6 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.437000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { confidentiality } for pid=4104 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 01:22:51.438000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc9b65228 a2=94 a3=83 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.438000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.438000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc9b65228 a2=94 a3=83 items=0 ppid=3942 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
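The PROCTITLE payload for pid 4104 decodes, with the helper shown earlier, to `bpftool map list --json`. Alongside the repeated CAP_BPF/CAP_PERFMON denials, two of its bpf() calls also hit an SELinux lockdown-class denial ({ confidentiality }, lockdown_reason="use of bpf to read kernel RAM", permissive=0), and the adjacent SYSCALL records show the call failing with success=no exit=-22. A small sketch, assuming Python 3, for reading the numeric fields in these records (arch=c00000b7 is AUDIT_ARCH_AARCH64, and on arm64 syscall 280 is bpf()):

    # Interpret the numeric fields of the SYSCALL records above.
    import errno
    AARCH64_SYSCALLS = {280: "bpf"}                    # subset, enough for these records
    print(AARCH64_SYSCALLS[280], errno.errorcode[22])  # -> bpf EINVAL (exit=-22)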
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.438000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { bpf } for pid=4107 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { bpf } for pid=4107 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { perfmon } for pid=4107 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { perfmon } for pid=4107 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { perfmon } for pid=4107 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { perfmon } for pid=4107 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { perfmon } for pid=4107 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { bpf } for pid=4107 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { bpf } for pid=4107 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit: BPF prog-id=18 op=LOAD Sep 6 01:22:51.446000 audit[4107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe900d378 a2=98 a3=ffffe900d368 items=0 ppid=3942 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.446000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 01:22:51.446000 audit: BPF prog-id=18 op=UNLOAD Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { bpf } for pid=4107 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { bpf } for pid=4107 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { perfmon } for pid=4107 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied 
{ perfmon } for pid=4107 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { perfmon } for pid=4107 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { perfmon } for pid=4107 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { perfmon } for pid=4107 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { bpf } for pid=4107 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { bpf } for pid=4107 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit: BPF prog-id=19 op=LOAD Sep 6 01:22:51.446000 audit[4107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe900d228 a2=74 a3=95 items=0 ppid=3942 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.446000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 01:22:51.446000 audit: BPF prog-id=19 op=UNLOAD Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { bpf } for pid=4107 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { bpf } for pid=4107 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { perfmon } for pid=4107 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { perfmon } for pid=4107 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { perfmon } for pid=4107 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { perfmon } for pid=4107 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { perfmon } for pid=4107 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { bpf } for pid=4107 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit[4107]: AVC avc: denied { bpf } for pid=4107 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.446000 audit: BPF prog-id=20 op=LOAD Sep 6 01:22:51.446000 audit[4107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe900d258 a2=40 a3=ffffe900d288 items=0 ppid=3942 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.446000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 01:22:51.447000 audit: BPF prog-id=20 op=UNLOAD Sep 6 01:22:51.522698 systemd-networkd[1753]: vxlan.calico: Link UP Sep 6 01:22:51.522704 systemd-networkd[1753]: vxlan.calico: Gained carrier Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit: BPF prog-id=21 op=LOAD Sep 6 01:22:51.538000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf11ea58 a2=98 a3=ffffcf11ea48 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 6 01:22:51.538000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.538000 audit: BPF prog-id=21 op=UNLOAD Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit: BPF prog-id=22 op=LOAD Sep 6 01:22:51.538000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf11e738 a2=74 a3=95 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.538000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.538000 audit: BPF prog-id=22 op=UNLOAD Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
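Just above, systemd-networkd reports the vxlan.calico interface coming up (Link UP, Gained carrier), and the PROCTITLE payload for pid 4134 decodes to the bpftool prog load invocation that produced the surrounding records, i.e. Calico pinning its XDP prefilter program. A one-line check, assuming Python 3 and reusing the decoding idea from the earlier sketch:

    # Decode the PROCTITLE payload of the pid 4134 records (loading Calico's XDP prefilter object).
    print(bytes.fromhex("627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470").replace(b"\x00", b" ").decode())
    # -> bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp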
tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit: BPF prog-id=23 op=LOAD Sep 6 01:22:51.538000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf11e798 a2=94 a3=2 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.538000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.538000 audit: BPF prog-id=23 op=UNLOAD Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcf11e7c8 a2=28 a3=ffffcf11e8f8 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.538000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.538000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.538000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcf11e7f8 a2=28 a3=ffffcf11e928 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.538000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcf11e6a8 a2=28 a3=ffffcf11e7d8 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcf11e818 a2=28 a3=ffffcf11e948 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcf11e7f8 a2=28 a3=ffffcf11e928 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcf11e7e8 a2=28 a3=ffffcf11e918 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcf11e818 a2=28 a3=ffffcf11e948 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcf11e7f8 a2=28 a3=ffffcf11e928 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcf11e818 a2=28 a3=ffffcf11e948 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcf11e7e8 a2=28 a3=ffffcf11e918 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcf11e868 a2=28 a3=ffffcf11e9a8 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit: BPF prog-id=24 op=LOAD Sep 6 01:22:51.539000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcf11e688 a2=40 a3=ffffcf11e6b8 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.539000 audit: BPF prog-id=24 op=UNLOAD Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffcf11e6b0 a2=50 a3=0 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffcf11e6b0 a2=50 a3=0 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.539000 audit: BPF prog-id=25 op=LOAD Sep 6 01:22:51.539000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcf11de18 a2=94 a3=2 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.540000 audit: BPF prog-id=25 op=UNLOAD Sep 6 01:22:51.540000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.540000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.540000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.540000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.540000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.540000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.540000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.540000 audit[4134]: AVC avc: denied { perfmon } for pid=4134 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.540000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.540000 audit[4134]: AVC avc: denied { bpf } for pid=4134 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.540000 audit: BPF prog-id=26 op=LOAD Sep 6 01:22:51.540000 audit[4134]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcf11dfa8 a2=94 a3=30 items=0 ppid=3942 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.540000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit: BPF prog-id=27 op=LOAD Sep 6 01:22:51.545000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffee123558 a2=98 a3=ffffee123548 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.545000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.545000 audit: BPF prog-id=27 op=UNLOAD Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { perfmon } for 
pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit: BPF prog-id=28 op=LOAD Sep 6 01:22:51.545000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffee1231e8 a2=74 a3=95 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.545000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.545000 audit: BPF prog-id=28 op=UNLOAD Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.545000 audit: BPF prog-id=29 op=LOAD Sep 6 01:22:51.545000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffee123248 a2=94 a3=2 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 
01:22:51.545000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.545000 audit: BPF prog-id=29 op=UNLOAD Sep 6 01:22:51.650000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.650000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.650000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.650000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.650000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.650000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.650000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.650000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.650000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.650000 audit: BPF prog-id=30 op=LOAD Sep 6 01:22:51.650000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffee123208 a2=40 a3=ffffee123238 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.650000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.650000 audit: BPF prog-id=30 op=UNLOAD Sep 6 01:22:51.650000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.650000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffee123320 a2=50 a3=0 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.650000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffee123278 a2=28 a3=ffffee1233a8 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffee1232a8 a2=28 a3=ffffee1233d8 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffee123158 a2=28 a3=ffffee123288 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffee1232c8 a2=28 a3=ffffee1233f8 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 
audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffee1232a8 a2=28 a3=ffffee1233d8 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffee123298 a2=28 a3=ffffee1233c8 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffee1232c8 a2=28 a3=ffffee1233f8 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffee1232a8 a2=28 a3=ffffee1233d8 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffee1232c8 a2=28 a3=ffffee1233f8 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffee123298 a2=28 a3=ffffee1233c8 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffee123318 a2=28 a3=ffffee123458 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffee123050 a2=50 a3=0 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit: BPF prog-id=31 op=LOAD Sep 6 01:22:51.661000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffee123058 a2=94 a3=5 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit: BPF prog-id=31 op=UNLOAD Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffee123160 a2=50 a3=0 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffee1232a8 a2=4 a3=3 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 
01:22:51.661000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.661000 audit[4136]: AVC avc: denied { confidentiality } for pid=4136 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 01:22:51.661000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffee123288 a2=94 a3=6 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.661000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { confidentiality } for pid=4136 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 01:22:51.662000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffee122a58 a2=94 a3=83 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.662000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { confidentiality } for pid=4136 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 01:22:51.662000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffee122a58 a2=94 a3=83 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.662000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffee124498 a2=10 a3=ffffee124588 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.662000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffee124358 a2=10 a3=ffffee124448 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.662000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffee1242c8 a2=10 a3=ffffee124448 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.662000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.662000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:22:51.662000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffee1242c8 a2=10 a3=ffffee124448 items=0 ppid=3942 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.662000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:22:51.667000 audit: BPF prog-id=26 op=UNLOAD Sep 6 01:22:51.774000 audit[4164]: NETFILTER_CFG table=mangle:104 family=2 entries=16 op=nft_register_chain pid=4164 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:22:51.774000 audit[4164]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc81ea080 a2=0 a3=ffffbe465fa8 items=0 ppid=3942 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.774000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:22:51.782000 audit[4163]: NETFILTER_CFG table=nat:105 family=2 entries=15 op=nft_register_chain pid=4163 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:22:51.782000 audit[4163]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe81a5e10 a2=0 a3=ffff944bcfa8 items=0 ppid=3942 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.782000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:22:51.828000 audit[4162]: NETFILTER_CFG table=raw:106 family=2 entries=21 op=nft_register_chain pid=4162 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:22:51.828000 audit[4162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffca9a2ba0 a2=0 a3=ffffb6e05fa8 items=0 ppid=3942 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.828000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:22:51.847000 audit[4166]: NETFILTER_CFG table=filter:107 family=2 entries=94 op=nft_register_chain pid=4166 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:22:51.847000 audit[4166]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=fffff3d3edd0 a2=0 a3=ffff9341cfa8 items=0 ppid=3942 pid=4166 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:51.847000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:22:52.273279 kubelet[2729]: I0906 01:22:52.273225 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c7f8a4f-c635-4c57-9d1e-e96d0df387cd" path="/var/lib/kubelet/pods/9c7f8a4f-c635-4c57-9d1e-e96d0df387cd/volumes" Sep 6 01:22:52.689004 env[1586]: time="2025-09-06T01:22:52.688954739Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:52.703033 env[1586]: time="2025-09-06T01:22:52.702994786Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:52.709530 env[1586]: time="2025-09-06T01:22:52.709485590Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:52.717093 env[1586]: time="2025-09-06T01:22:52.717060234Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:52.717626 env[1586]: time="2025-09-06T01:22:52.717599275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 6 01:22:52.720079 env[1586]: time="2025-09-06T01:22:52.720051356Z" level=info msg="CreateContainer within sandbox \"4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 6 01:22:52.763956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4200156354.mount: Deactivated successfully. 
Sep 6 01:22:52.785056 env[1586]: time="2025-09-06T01:22:52.785013512Z" level=info msg="CreateContainer within sandbox \"4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"415e74955e2e85c60e76e6c046260360150381637d97ebb4d4c0e346ea3a013b\"" Sep 6 01:22:52.787253 env[1586]: time="2025-09-06T01:22:52.787216993Z" level=info msg="StartContainer for \"415e74955e2e85c60e76e6c046260360150381637d97ebb4d4c0e346ea3a013b\"" Sep 6 01:22:52.854321 env[1586]: time="2025-09-06T01:22:52.854235350Z" level=info msg="StartContainer for \"415e74955e2e85c60e76e6c046260360150381637d97ebb4d4c0e346ea3a013b\" returns successfully" Sep 6 01:22:52.856425 env[1586]: time="2025-09-06T01:22:52.855933471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 6 01:22:53.029388 systemd-networkd[1753]: calidd6e5eee6c2: Gained IPv6LL Sep 6 01:22:53.413410 systemd-networkd[1753]: vxlan.calico: Gained IPv6LL Sep 6 01:22:54.271854 env[1586]: time="2025-09-06T01:22:54.271818208Z" level=info msg="StopPodSandbox for \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\"" Sep 6 01:22:54.361029 env[1586]: 2025-09-06 01:22:54.327 [INFO][4227] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Sep 6 01:22:54.361029 env[1586]: 2025-09-06 01:22:54.327 [INFO][4227] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" iface="eth0" netns="/var/run/netns/cni-60860b62-a5a5-8d08-450c-bd3bd4a87fac" Sep 6 01:22:54.361029 env[1586]: 2025-09-06 01:22:54.329 [INFO][4227] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" iface="eth0" netns="/var/run/netns/cni-60860b62-a5a5-8d08-450c-bd3bd4a87fac" Sep 6 01:22:54.361029 env[1586]: 2025-09-06 01:22:54.329 [INFO][4227] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" iface="eth0" netns="/var/run/netns/cni-60860b62-a5a5-8d08-450c-bd3bd4a87fac" Sep 6 01:22:54.361029 env[1586]: 2025-09-06 01:22:54.329 [INFO][4227] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Sep 6 01:22:54.361029 env[1586]: 2025-09-06 01:22:54.329 [INFO][4227] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Sep 6 01:22:54.361029 env[1586]: 2025-09-06 01:22:54.347 [INFO][4234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" HandleID="k8s-pod-network.ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:22:54.361029 env[1586]: 2025-09-06 01:22:54.347 [INFO][4234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:22:54.361029 env[1586]: 2025-09-06 01:22:54.347 [INFO][4234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:22:54.361029 env[1586]: 2025-09-06 01:22:54.355 [WARNING][4234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" HandleID="k8s-pod-network.ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:22:54.361029 env[1586]: 2025-09-06 01:22:54.355 [INFO][4234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" HandleID="k8s-pod-network.ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:22:54.361029 env[1586]: 2025-09-06 01:22:54.357 [INFO][4234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:22:54.361029 env[1586]: 2025-09-06 01:22:54.359 [INFO][4227] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Sep 6 01:22:54.364808 env[1586]: time="2025-09-06T01:22:54.364742339Z" level=info msg="TearDown network for sandbox \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\" successfully" Sep 6 01:22:54.364808 env[1586]: time="2025-09-06T01:22:54.364778659Z" level=info msg="StopPodSandbox for \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\" returns successfully" Sep 6 01:22:54.363623 systemd[1]: run-netns-cni\x2d60860b62\x2da5a5\x2d8d08\x2d450c\x2dbd3bd4a87fac.mount: Deactivated successfully. Sep 6 01:22:54.365850 env[1586]: time="2025-09-06T01:22:54.365826779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56cb94fc6-989l7,Uid:75a2fc54-827a-4281-a019-abda9f06779c,Namespace:calico-apiserver,Attempt:1,}" Sep 6 01:22:54.632866 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:22:54.632981 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2a7f55f8928: link becomes ready Sep 6 01:22:54.633605 systemd-networkd[1753]: cali2a7f55f8928: Link UP Sep 6 01:22:54.634357 systemd-networkd[1753]: cali2a7f55f8928: Gained carrier Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.538 [INFO][4241] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0 calico-apiserver-56cb94fc6- calico-apiserver 75a2fc54-827a-4281-a019-abda9f06779c 934 0 2025-09-06 01:22:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56cb94fc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-34c19deec5 calico-apiserver-56cb94fc6-989l7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2a7f55f8928 [] [] }} ContainerID="eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-989l7" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-" Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.538 [INFO][4241] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-989l7" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.572 [INFO][4254] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" HandleID="k8s-pod-network.eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.572 [INFO][4254] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" HandleID="k8s-pod-network.eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd8f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-34c19deec5", "pod":"calico-apiserver-56cb94fc6-989l7", "timestamp":"2025-09-06 01:22:54.572834212 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34c19deec5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.573 [INFO][4254] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.573 [INFO][4254] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.573 [INFO][4254] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34c19deec5' Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.582 [INFO][4254] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.586 [INFO][4254] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.595 [INFO][4254] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.597 [INFO][4254] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.599 [INFO][4254] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.599 [INFO][4254] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.601 [INFO][4254] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8 Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.606 [INFO][4254] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.617 [INFO][4254] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.66/26] block=192.168.61.64/26 handle="k8s-pod-network.eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" 
host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.617 [INFO][4254] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.66/26] handle="k8s-pod-network.eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.617 [INFO][4254] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:22:54.666016 env[1586]: 2025-09-06 01:22:54.617 [INFO][4254] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.66/26] IPv6=[] ContainerID="eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" HandleID="k8s-pod-network.eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:22:54.666655 env[1586]: 2025-09-06 01:22:54.618 [INFO][4241] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-989l7" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0", GenerateName:"calico-apiserver-56cb94fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"75a2fc54-827a-4281-a019-abda9f06779c", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56cb94fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"", Pod:"calico-apiserver-56cb94fc6-989l7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2a7f55f8928", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:22:54.666655 env[1586]: 2025-09-06 01:22:54.619 [INFO][4241] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.66/32] ContainerID="eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-989l7" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:22:54.666655 env[1586]: 2025-09-06 01:22:54.619 [INFO][4241] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a7f55f8928 ContainerID="eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-989l7" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:22:54.666655 env[1586]: 2025-09-06 01:22:54.635 [INFO][4241] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-989l7" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:22:54.666655 env[1586]: 2025-09-06 01:22:54.637 [INFO][4241] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-989l7" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0", GenerateName:"calico-apiserver-56cb94fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"75a2fc54-827a-4281-a019-abda9f06779c", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56cb94fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8", Pod:"calico-apiserver-56cb94fc6-989l7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2a7f55f8928", MAC:"3e:30:a8:76:67:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:22:54.666655 env[1586]: 2025-09-06 01:22:54.663 [INFO][4241] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-989l7" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:22:54.681000 audit[4267]: NETFILTER_CFG table=filter:108 family=2 entries=50 op=nft_register_chain pid=4267 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:22:54.688515 kernel: kauditd_printk_skb: 561 callbacks suppressed Sep 6 01:22:54.688660 kernel: audit: type=1325 audit(1757121774.681:425): table=filter:108 family=2 entries=50 op=nft_register_chain pid=4267 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:22:54.681000 audit[4267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=ffffd593a3e0 a2=0 a3=ffffa9ca6fa8 items=0 ppid=3942 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:54.736440 kernel: audit: type=1300 audit(1757121774.681:425): 
arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=ffffd593a3e0 a2=0 a3=ffffa9ca6fa8 items=0 ppid=3942 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:54.681000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:22:54.755480 kernel: audit: type=1327 audit(1757121774.681:425): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:22:54.774619 env[1586]: time="2025-09-06T01:22:54.774412201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:22:54.774619 env[1586]: time="2025-09-06T01:22:54.774446721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:22:54.774619 env[1586]: time="2025-09-06T01:22:54.774456561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:22:54.774619 env[1586]: time="2025-09-06T01:22:54.774566201Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8 pid=4275 runtime=io.containerd.runc.v2 Sep 6 01:22:54.834194 env[1586]: time="2025-09-06T01:22:54.834153553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56cb94fc6-989l7,Uid:75a2fc54-827a-4281-a019-abda9f06779c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8\"" Sep 6 01:22:55.145677 env[1586]: time="2025-09-06T01:22:55.145633802Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:55.158235 env[1586]: time="2025-09-06T01:22:55.158200688Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:55.166384 env[1586]: time="2025-09-06T01:22:55.166345453Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:55.175217 env[1586]: time="2025-09-06T01:22:55.175176818Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:55.175865 env[1586]: time="2025-09-06T01:22:55.175801818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 6 01:22:55.178558 env[1586]: time="2025-09-06T01:22:55.178521339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 01:22:55.180133 env[1586]: time="2025-09-06T01:22:55.180106260Z" level=info 
msg="CreateContainer within sandbox \"4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 6 01:22:55.266873 env[1586]: time="2025-09-06T01:22:55.266792467Z" level=info msg="CreateContainer within sandbox \"4374b597f3b3215294d293756b65a73a0246d0e7f3154e3661f6945525190093\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"359ba7a902a8330479f4773aa7edd92fa290e6d1ae1082d8db87193b65e08359\"" Sep 6 01:22:55.269124 env[1586]: time="2025-09-06T01:22:55.269099468Z" level=info msg="StartContainer for \"359ba7a902a8330479f4773aa7edd92fa290e6d1ae1082d8db87193b65e08359\"" Sep 6 01:22:55.273441 env[1586]: time="2025-09-06T01:22:55.273389790Z" level=info msg="StopPodSandbox for \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\"" Sep 6 01:22:55.364225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717981331.mount: Deactivated successfully. Sep 6 01:22:55.403566 env[1586]: 2025-09-06 01:22:55.348 [INFO][4327] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Sep 6 01:22:55.403566 env[1586]: 2025-09-06 01:22:55.348 [INFO][4327] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" iface="eth0" netns="/var/run/netns/cni-f4ce87f5-882c-b038-541f-c7b209318996" Sep 6 01:22:55.403566 env[1586]: 2025-09-06 01:22:55.349 [INFO][4327] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" iface="eth0" netns="/var/run/netns/cni-f4ce87f5-882c-b038-541f-c7b209318996" Sep 6 01:22:55.403566 env[1586]: 2025-09-06 01:22:55.349 [INFO][4327] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" iface="eth0" netns="/var/run/netns/cni-f4ce87f5-882c-b038-541f-c7b209318996" Sep 6 01:22:55.403566 env[1586]: 2025-09-06 01:22:55.349 [INFO][4327] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Sep 6 01:22:55.403566 env[1586]: 2025-09-06 01:22:55.349 [INFO][4327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Sep 6 01:22:55.403566 env[1586]: 2025-09-06 01:22:55.385 [INFO][4356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" HandleID="k8s-pod-network.fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Workload="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:22:55.403566 env[1586]: 2025-09-06 01:22:55.385 [INFO][4356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:22:55.403566 env[1586]: 2025-09-06 01:22:55.387 [INFO][4356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:22:55.403566 env[1586]: 2025-09-06 01:22:55.396 [WARNING][4356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" HandleID="k8s-pod-network.fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Workload="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:22:55.403566 env[1586]: 2025-09-06 01:22:55.396 [INFO][4356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" HandleID="k8s-pod-network.fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Workload="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:22:55.403566 env[1586]: 2025-09-06 01:22:55.398 [INFO][4356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:22:55.403566 env[1586]: 2025-09-06 01:22:55.399 [INFO][4327] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Sep 6 01:22:55.403432 systemd[1]: run-netns-cni\x2df4ce87f5\x2d882c\x2db038\x2d541f\x2dc7b209318996.mount: Deactivated successfully. Sep 6 01:22:55.405849 env[1586]: time="2025-09-06T01:22:55.405815141Z" level=info msg="TearDown network for sandbox \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\" successfully" Sep 6 01:22:55.405956 env[1586]: time="2025-09-06T01:22:55.405940022Z" level=info msg="StopPodSandbox for \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\" returns successfully" Sep 6 01:22:55.406620 env[1586]: time="2025-09-06T01:22:55.406596302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r5mms,Uid:d9edb798-4304-4b76-a60a-df0eaa0d87c0,Namespace:calico-system,Attempt:1,}" Sep 6 01:22:55.462603 env[1586]: time="2025-09-06T01:22:55.462557052Z" level=info msg="StartContainer for \"359ba7a902a8330479f4773aa7edd92fa290e6d1ae1082d8db87193b65e08359\" returns successfully" Sep 6 01:22:55.576000 audit[4370]: NETFILTER_CFG table=filter:109 family=2 entries=19 op=nft_register_rule pid=4370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:55.576000 audit[4370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffb253d50 a2=0 a3=1 items=0 ppid=2834 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:55.617275 kernel: audit: type=1325 audit(1757121775.576:426): table=filter:109 family=2 entries=19 op=nft_register_rule pid=4370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:55.617479 kernel: audit: type=1300 audit(1757121775.576:426): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffb253d50 a2=0 a3=1 items=0 ppid=2834 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:55.617534 kernel: audit: type=1327 audit(1757121775.576:426): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:55.576000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:55.616000 audit[4370]: NETFILTER_CFG table=nat:110 family=2 entries=21 op=nft_register_chain pid=4370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:55.642160 
kernel: audit: type=1325 audit(1757121775.616:427): table=nat:110 family=2 entries=21 op=nft_register_chain pid=4370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:55.616000 audit[4370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7044 a0=3 a1=fffffb253d50 a2=0 a3=1 items=0 ppid=2834 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:55.669981 kernel: audit: type=1300 audit(1757121775.616:427): arch=c00000b7 syscall=211 success=yes exit=7044 a0=3 a1=fffffb253d50 a2=0 a3=1 items=0 ppid=2834 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:55.670203 kernel: audit: type=1327 audit(1757121775.616:427): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:55.616000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:55.846519 systemd-networkd[1753]: cali2a7f55f8928: Gained IPv6LL Sep 6 01:22:55.867543 systemd-networkd[1753]: cali143458d2057: Link UP Sep 6 01:22:55.879573 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:22:55.879675 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali143458d2057: link becomes ready Sep 6 01:22:55.880047 systemd-networkd[1753]: cali143458d2057: Gained carrier Sep 6 01:22:55.904454 kubelet[2729]: I0906 01:22:55.904390 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6b9bf7db4-hpdlz" podStartSLOduration=2.043328042 podStartE2EDuration="5.904371529s" podCreationTimestamp="2025-09-06 01:22:50 +0000 UTC" firstStartedPulling="2025-09-06 01:22:51.316547292 +0000 UTC m=+41.204651502" lastFinishedPulling="2025-09-06 01:22:55.177590739 +0000 UTC m=+45.065694989" observedRunningTime="2025-09-06 01:22:55.550008379 +0000 UTC m=+45.438112589" watchObservedRunningTime="2025-09-06 01:22:55.904371529 +0000 UTC m=+45.792475739" Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.790 [INFO][4371] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0 csi-node-driver- calico-system d9edb798-4304-4b76-a60a-df0eaa0d87c0 943 0 2025-09-06 01:22:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.8-n-34c19deec5 csi-node-driver-r5mms eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali143458d2057 [] [] }} ContainerID="2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" Namespace="calico-system" Pod="csi-node-driver-r5mms" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-" Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.790 [INFO][4371] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" Namespace="calico-system" Pod="csi-node-driver-r5mms" 
WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.817 [INFO][4384] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" HandleID="k8s-pod-network.2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" Workload="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.817 [INFO][4384] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" HandleID="k8s-pod-network.2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" Workload="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002caff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-34c19deec5", "pod":"csi-node-driver-r5mms", "timestamp":"2025-09-06 01:22:55.817193362 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34c19deec5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.817 [INFO][4384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.817 [INFO][4384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.817 [INFO][4384] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34c19deec5' Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.826 [INFO][4384] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.829 [INFO][4384] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.832 [INFO][4384] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.834 [INFO][4384] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.836 [INFO][4384] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.836 [INFO][4384] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.838 [INFO][4384] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.843 [INFO][4384] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.853 [INFO][4384] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.67/26] block=192.168.61.64/26 
handle="k8s-pod-network.2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.853 [INFO][4384] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.67/26] handle="k8s-pod-network.2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.853 [INFO][4384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:22:55.906380 env[1586]: 2025-09-06 01:22:55.853 [INFO][4384] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.67/26] IPv6=[] ContainerID="2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" HandleID="k8s-pod-network.2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" Workload="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:22:55.906904 env[1586]: 2025-09-06 01:22:55.855 [INFO][4371] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" Namespace="calico-system" Pod="csi-node-driver-r5mms" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9edb798-4304-4b76-a60a-df0eaa0d87c0", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"", Pod:"csi-node-driver-r5mms", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali143458d2057", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:22:55.906904 env[1586]: 2025-09-06 01:22:55.855 [INFO][4371] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.67/32] ContainerID="2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" Namespace="calico-system" Pod="csi-node-driver-r5mms" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:22:55.906904 env[1586]: 2025-09-06 01:22:55.855 [INFO][4371] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali143458d2057 ContainerID="2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" Namespace="calico-system" Pod="csi-node-driver-r5mms" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:22:55.906904 env[1586]: 2025-09-06 01:22:55.880 [INFO][4371] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" Namespace="calico-system" Pod="csi-node-driver-r5mms" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:22:55.906904 env[1586]: 2025-09-06 01:22:55.880 [INFO][4371] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" Namespace="calico-system" Pod="csi-node-driver-r5mms" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9edb798-4304-4b76-a60a-df0eaa0d87c0", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc", Pod:"csi-node-driver-r5mms", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali143458d2057", MAC:"ce:b0:40:e3:32:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:22:55.906904 env[1586]: 2025-09-06 01:22:55.903 [INFO][4371] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc" Namespace="calico-system" Pod="csi-node-driver-r5mms" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:22:55.916000 audit[4398]: NETFILTER_CFG table=filter:111 family=2 entries=40 op=nft_register_chain pid=4398 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:22:55.916000 audit[4398]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=fffffd021860 a2=0 a3=ffff98268fa8 items=0 ppid=3942 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:55.931403 kernel: audit: type=1325 audit(1757121775.916:428): table=filter:111 family=2 entries=40 op=nft_register_chain pid=4398 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:22:55.916000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:22:55.938516 env[1586]: 
time="2025-09-06T01:22:55.937448827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:22:55.938516 env[1586]: time="2025-09-06T01:22:55.937483827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:22:55.938516 env[1586]: time="2025-09-06T01:22:55.937505347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:22:55.938516 env[1586]: time="2025-09-06T01:22:55.937618867Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc pid=4405 runtime=io.containerd.runc.v2 Sep 6 01:22:55.983929 env[1586]: time="2025-09-06T01:22:55.983884572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r5mms,Uid:d9edb798-4304-4b76-a60a-df0eaa0d87c0,Namespace:calico-system,Attempt:1,} returns sandbox id \"2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc\"" Sep 6 01:22:56.280068 env[1586]: time="2025-09-06T01:22:56.278699929Z" level=info msg="StopPodSandbox for \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\"" Sep 6 01:22:56.368499 env[1586]: 2025-09-06 01:22:56.331 [INFO][4449] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Sep 6 01:22:56.368499 env[1586]: 2025-09-06 01:22:56.331 [INFO][4449] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" iface="eth0" netns="/var/run/netns/cni-cdae94be-0bbb-a17e-0920-3d88c3c7f232" Sep 6 01:22:56.368499 env[1586]: 2025-09-06 01:22:56.331 [INFO][4449] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" iface="eth0" netns="/var/run/netns/cni-cdae94be-0bbb-a17e-0920-3d88c3c7f232" Sep 6 01:22:56.368499 env[1586]: 2025-09-06 01:22:56.331 [INFO][4449] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" iface="eth0" netns="/var/run/netns/cni-cdae94be-0bbb-a17e-0920-3d88c3c7f232" Sep 6 01:22:56.368499 env[1586]: 2025-09-06 01:22:56.331 [INFO][4449] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Sep 6 01:22:56.368499 env[1586]: 2025-09-06 01:22:56.331 [INFO][4449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Sep 6 01:22:56.368499 env[1586]: 2025-09-06 01:22:56.350 [INFO][4456] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" HandleID="k8s-pod-network.23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Workload="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" Sep 6 01:22:56.368499 env[1586]: 2025-09-06 01:22:56.350 [INFO][4456] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:22:56.368499 env[1586]: 2025-09-06 01:22:56.350 [INFO][4456] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 01:22:56.368499 env[1586]: 2025-09-06 01:22:56.364 [WARNING][4456] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" HandleID="k8s-pod-network.23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Workload="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" Sep 6 01:22:56.368499 env[1586]: 2025-09-06 01:22:56.364 [INFO][4456] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" HandleID="k8s-pod-network.23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Workload="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" Sep 6 01:22:56.368499 env[1586]: 2025-09-06 01:22:56.365 [INFO][4456] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:22:56.368499 env[1586]: 2025-09-06 01:22:56.367 [INFO][4449] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Sep 6 01:22:56.371156 systemd[1]: run-netns-cni\x2dcdae94be\x2d0bbb\x2da17e\x2d0920\x2d3d88c3c7f232.mount: Deactivated successfully. Sep 6 01:22:56.372695 env[1586]: time="2025-09-06T01:22:56.372654859Z" level=info msg="TearDown network for sandbox \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\" successfully" Sep 6 01:22:56.372782 env[1586]: time="2025-09-06T01:22:56.372764379Z" level=info msg="StopPodSandbox for \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\" returns successfully" Sep 6 01:22:56.373895 env[1586]: time="2025-09-06T01:22:56.373862019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nn22p,Uid:6be53ee4-d329-4ede-8f50-6b8eeba7191c,Namespace:calico-system,Attempt:1,}" Sep 6 01:22:56.596212 systemd-networkd[1753]: caliba7521ece98: Link UP Sep 6 01:22:56.605686 systemd-networkd[1753]: caliba7521ece98: Gained carrier Sep 6 01:22:56.606288 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliba7521ece98: link becomes ready Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.498 [INFO][4462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0 goldmane-7988f88666- calico-system 6be53ee4-d329-4ede-8f50-6b8eeba7191c 957 0 2025-09-06 01:22:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.8-n-34c19deec5 goldmane-7988f88666-nn22p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliba7521ece98 [] [] }} ContainerID="b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" Namespace="calico-system" Pod="goldmane-7988f88666-nn22p" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-" Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.499 [INFO][4462] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" Namespace="calico-system" Pod="goldmane-7988f88666-nn22p" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.523 [INFO][4475] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" HandleID="k8s-pod-network.b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" Workload="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.524 [INFO][4475] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" HandleID="k8s-pod-network.b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" Workload="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-34c19deec5", "pod":"goldmane-7988f88666-nn22p", "timestamp":"2025-09-06 01:22:56.523657379 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34c19deec5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.524 [INFO][4475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.524 [INFO][4475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.524 [INFO][4475] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34c19deec5' Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.548 [INFO][4475] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.553 [INFO][4475] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.556 [INFO][4475] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.558 [INFO][4475] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.560 [INFO][4475] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.560 [INFO][4475] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.563 [INFO][4475] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5 Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.571 [INFO][4475] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.586 [INFO][4475] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.68/26] block=192.168.61.64/26 handle="k8s-pod-network.b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.586 [INFO][4475] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.61.68/26] handle="k8s-pod-network.b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.586 [INFO][4475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:22:56.643070 env[1586]: 2025-09-06 01:22:56.586 [INFO][4475] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.68/26] IPv6=[] ContainerID="b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" HandleID="k8s-pod-network.b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" Workload="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" Sep 6 01:22:56.643743 env[1586]: 2025-09-06 01:22:56.589 [INFO][4462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" Namespace="calico-system" Pod="goldmane-7988f88666-nn22p" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6be53ee4-d329-4ede-8f50-6b8eeba7191c", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"", Pod:"goldmane-7988f88666-nn22p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba7521ece98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:22:56.643743 env[1586]: 2025-09-06 01:22:56.589 [INFO][4462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.68/32] ContainerID="b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" Namespace="calico-system" Pod="goldmane-7988f88666-nn22p" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" Sep 6 01:22:56.643743 env[1586]: 2025-09-06 01:22:56.589 [INFO][4462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba7521ece98 ContainerID="b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" Namespace="calico-system" Pod="goldmane-7988f88666-nn22p" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" Sep 6 01:22:56.643743 env[1586]: 2025-09-06 01:22:56.606 [INFO][4462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" Namespace="calico-system" Pod="goldmane-7988f88666-nn22p" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" 
Sep 6 01:22:56.643743 env[1586]: 2025-09-06 01:22:56.610 [INFO][4462] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" Namespace="calico-system" Pod="goldmane-7988f88666-nn22p" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6be53ee4-d329-4ede-8f50-6b8eeba7191c", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5", Pod:"goldmane-7988f88666-nn22p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba7521ece98", MAC:"4a:b4:38:38:a0:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:22:56.643743 env[1586]: 2025-09-06 01:22:56.640 [INFO][4462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5" Namespace="calico-system" Pod="goldmane-7988f88666-nn22p" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" Sep 6 01:22:56.661000 audit[4490]: NETFILTER_CFG table=filter:112 family=2 entries=52 op=nft_register_chain pid=4490 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:22:56.661000 audit[4490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27556 a0=3 a1=ffffe3bd6de0 a2=0 a3=ffffaea56fa8 items=0 ppid=3942 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:56.661000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:22:56.686093 env[1586]: time="2025-09-06T01:22:56.686007825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:22:56.686412 env[1586]: time="2025-09-06T01:22:56.686060225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:22:56.686412 env[1586]: time="2025-09-06T01:22:56.686072025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:22:56.686412 env[1586]: time="2025-09-06T01:22:56.686346705Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5 pid=4497 runtime=io.containerd.runc.v2 Sep 6 01:22:56.737223 env[1586]: time="2025-09-06T01:22:56.737173652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nn22p,Uid:6be53ee4-d329-4ede-8f50-6b8eeba7191c,Namespace:calico-system,Attempt:1,} returns sandbox id \"b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5\"" Sep 6 01:22:57.271979 env[1586]: time="2025-09-06T01:22:57.271941295Z" level=info msg="StopPodSandbox for \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\"" Sep 6 01:22:57.394715 env[1586]: 2025-09-06 01:22:57.348 [INFO][4546] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Sep 6 01:22:57.394715 env[1586]: 2025-09-06 01:22:57.348 [INFO][4546] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" iface="eth0" netns="/var/run/netns/cni-1cf9fd03-3777-a454-c660-700cd631ee27" Sep 6 01:22:57.394715 env[1586]: 2025-09-06 01:22:57.348 [INFO][4546] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" iface="eth0" netns="/var/run/netns/cni-1cf9fd03-3777-a454-c660-700cd631ee27" Sep 6 01:22:57.394715 env[1586]: 2025-09-06 01:22:57.349 [INFO][4546] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" iface="eth0" netns="/var/run/netns/cni-1cf9fd03-3777-a454-c660-700cd631ee27" Sep 6 01:22:57.394715 env[1586]: 2025-09-06 01:22:57.349 [INFO][4546] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Sep 6 01:22:57.394715 env[1586]: 2025-09-06 01:22:57.349 [INFO][4546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Sep 6 01:22:57.394715 env[1586]: 2025-09-06 01:22:57.376 [INFO][4554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" HandleID="k8s-pod-network.8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:22:57.394715 env[1586]: 2025-09-06 01:22:57.376 [INFO][4554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:22:57.394715 env[1586]: 2025-09-06 01:22:57.376 [INFO][4554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:22:57.394715 env[1586]: 2025-09-06 01:22:57.390 [WARNING][4554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" HandleID="k8s-pod-network.8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:22:57.394715 env[1586]: 2025-09-06 01:22:57.390 [INFO][4554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" HandleID="k8s-pod-network.8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:22:57.394715 env[1586]: 2025-09-06 01:22:57.392 [INFO][4554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:22:57.394715 env[1586]: 2025-09-06 01:22:57.393 [INFO][4546] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Sep 6 01:22:57.397878 systemd[1]: run-netns-cni\x2d1cf9fd03\x2d3777\x2da454\x2dc660\x2d700cd631ee27.mount: Deactivated successfully. Sep 6 01:22:57.399546 env[1586]: time="2025-09-06T01:22:57.399507522Z" level=info msg="TearDown network for sandbox \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\" successfully" Sep 6 01:22:57.399626 env[1586]: time="2025-09-06T01:22:57.399610122Z" level=info msg="StopPodSandbox for \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\" returns successfully" Sep 6 01:22:57.400414 env[1586]: time="2025-09-06T01:22:57.400389682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56cb94fc6-8w5rf,Uid:f5d38221-ee04-42c6-b763-9c6f6f204114,Namespace:calico-apiserver,Attempt:1,}" Sep 6 01:22:57.615948 systemd-networkd[1753]: calie261c37c714: Link UP Sep 6 01:22:57.628941 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:22:57.629055 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie261c37c714: link becomes ready Sep 6 01:22:57.629428 systemd-networkd[1753]: calie261c37c714: Gained carrier Sep 6 01:22:57.639539 systemd-networkd[1753]: cali143458d2057: Gained IPv6LL Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.518 [INFO][4564] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0 calico-apiserver-56cb94fc6- calico-apiserver f5d38221-ee04-42c6-b763-9c6f6f204114 964 0 2025-09-06 01:22:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56cb94fc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-34c19deec5 calico-apiserver-56cb94fc6-8w5rf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie261c37c714 [] [] }} ContainerID="5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-8w5rf" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-" Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.518 [INFO][4564] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-8w5rf" 
WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.560 [INFO][4573] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" HandleID="k8s-pod-network.5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.560 [INFO][4573] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" HandleID="k8s-pod-network.5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3290), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-34c19deec5", "pod":"calico-apiserver-56cb94fc6-8w5rf", "timestamp":"2025-09-06 01:22:57.560379327 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34c19deec5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.560 [INFO][4573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.567 [INFO][4573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.567 [INFO][4573] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34c19deec5' Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.576 [INFO][4573] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.579 [INFO][4573] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.583 [INFO][4573] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.584 [INFO][4573] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.587 [INFO][4573] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.587 [INFO][4573] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.588 [INFO][4573] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580 Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.593 [INFO][4573] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.603 [INFO][4573] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.61.69/26] block=192.168.61.64/26 handle="k8s-pod-network.5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.603 [INFO][4573] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.69/26] handle="k8s-pod-network.5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.603 [INFO][4573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:22:57.650904 env[1586]: 2025-09-06 01:22:57.603 [INFO][4573] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.69/26] IPv6=[] ContainerID="5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" HandleID="k8s-pod-network.5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:22:57.651640 env[1586]: 2025-09-06 01:22:57.605 [INFO][4564] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-8w5rf" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0", GenerateName:"calico-apiserver-56cb94fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5d38221-ee04-42c6-b763-9c6f6f204114", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56cb94fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"", Pod:"calico-apiserver-56cb94fc6-8w5rf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie261c37c714", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:22:57.651640 env[1586]: 2025-09-06 01:22:57.605 [INFO][4564] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.69/32] ContainerID="5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-8w5rf" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:22:57.651640 env[1586]: 2025-09-06 01:22:57.605 [INFO][4564] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie261c37c714 ContainerID="5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-8w5rf" 
WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:22:57.651640 env[1586]: 2025-09-06 01:22:57.630 [INFO][4564] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-8w5rf" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:22:57.651640 env[1586]: 2025-09-06 01:22:57.634 [INFO][4564] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-8w5rf" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0", GenerateName:"calico-apiserver-56cb94fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5d38221-ee04-42c6-b763-9c6f6f204114", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56cb94fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580", Pod:"calico-apiserver-56cb94fc6-8w5rf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie261c37c714", MAC:"16:f4:b4:45:88:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:22:57.651640 env[1586]: 2025-09-06 01:22:57.646 [INFO][4564] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580" Namespace="calico-apiserver" Pod="calico-apiserver-56cb94fc6-8w5rf" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:22:57.660000 audit[4588]: NETFILTER_CFG table=filter:113 family=2 entries=49 op=nft_register_chain pid=4588 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:22:57.660000 audit[4588]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25452 a0=3 a1=ffffc6aca510 a2=0 a3=ffff99948fa8 items=0 ppid=3942 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:57.660000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:22:57.689659 env[1586]: time="2025-09-06T01:22:57.689484954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:22:57.689659 env[1586]: time="2025-09-06T01:22:57.689523514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:22:57.689659 env[1586]: time="2025-09-06T01:22:57.689533794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:22:57.689832 env[1586]: time="2025-09-06T01:22:57.689687555Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580 pid=4595 runtime=io.containerd.runc.v2 Sep 6 01:22:57.746373 env[1586]: time="2025-09-06T01:22:57.746328304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56cb94fc6-8w5rf,Uid:f5d38221-ee04-42c6-b763-9c6f6f204114,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580\"" Sep 6 01:22:57.765732 systemd-networkd[1753]: caliba7521ece98: Gained IPv6LL Sep 6 01:22:57.962508 env[1586]: time="2025-09-06T01:22:57.962455378Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:57.988031 env[1586]: time="2025-09-06T01:22:57.987991911Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:57.994515 env[1586]: time="2025-09-06T01:22:57.994466515Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:58.001086 env[1586]: time="2025-09-06T01:22:58.001038678Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:22:58.001381 env[1586]: time="2025-09-06T01:22:58.001351318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 6 01:22:58.003969 env[1586]: time="2025-09-06T01:22:58.003937960Z" level=info msg="CreateContainer within sandbox \"eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 6 01:22:58.005062 env[1586]: time="2025-09-06T01:22:58.005038520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 6 01:22:58.073706 env[1586]: time="2025-09-06T01:22:58.073649516Z" level=info msg="CreateContainer within sandbox \"eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"29c56b99452363c2d79cb379af5f35b54dcf898db0324514fab04ee6d20b3e83\"" Sep 6 01:22:58.074688 env[1586]: 
time="2025-09-06T01:22:58.074654917Z" level=info msg="StartContainer for \"29c56b99452363c2d79cb379af5f35b54dcf898db0324514fab04ee6d20b3e83\"" Sep 6 01:22:58.162411 env[1586]: time="2025-09-06T01:22:58.162350122Z" level=info msg="StartContainer for \"29c56b99452363c2d79cb379af5f35b54dcf898db0324514fab04ee6d20b3e83\" returns successfully" Sep 6 01:22:58.278560 env[1586]: time="2025-09-06T01:22:58.278457623Z" level=info msg="StopPodSandbox for \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\"" Sep 6 01:22:58.385336 env[1586]: 2025-09-06 01:22:58.341 [INFO][4677] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Sep 6 01:22:58.385336 env[1586]: 2025-09-06 01:22:58.341 [INFO][4677] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" iface="eth0" netns="/var/run/netns/cni-1643208d-4a32-2d83-a476-97eba0fb32f1" Sep 6 01:22:58.385336 env[1586]: 2025-09-06 01:22:58.342 [INFO][4677] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" iface="eth0" netns="/var/run/netns/cni-1643208d-4a32-2d83-a476-97eba0fb32f1" Sep 6 01:22:58.385336 env[1586]: 2025-09-06 01:22:58.342 [INFO][4677] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" iface="eth0" netns="/var/run/netns/cni-1643208d-4a32-2d83-a476-97eba0fb32f1" Sep 6 01:22:58.385336 env[1586]: 2025-09-06 01:22:58.342 [INFO][4677] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Sep 6 01:22:58.385336 env[1586]: 2025-09-06 01:22:58.342 [INFO][4677] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Sep 6 01:22:58.385336 env[1586]: 2025-09-06 01:22:58.370 [INFO][4684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" HandleID="k8s-pod-network.d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:22:58.385336 env[1586]: 2025-09-06 01:22:58.370 [INFO][4684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:22:58.385336 env[1586]: 2025-09-06 01:22:58.370 [INFO][4684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:22:58.385336 env[1586]: 2025-09-06 01:22:58.380 [WARNING][4684] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" HandleID="k8s-pod-network.d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:22:58.385336 env[1586]: 2025-09-06 01:22:58.380 [INFO][4684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" HandleID="k8s-pod-network.d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:22:58.385336 env[1586]: 2025-09-06 01:22:58.381 [INFO][4684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 01:22:58.385336 env[1586]: 2025-09-06 01:22:58.382 [INFO][4677] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Sep 6 01:22:58.392052 env[1586]: time="2025-09-06T01:22:58.388200040Z" level=info msg="TearDown network for sandbox \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\" successfully" Sep 6 01:22:58.392052 env[1586]: time="2025-09-06T01:22:58.388291520Z" level=info msg="StopPodSandbox for \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\" returns successfully" Sep 6 01:22:58.392052 env[1586]: time="2025-09-06T01:22:58.389035120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5pdmz,Uid:a84708b2-35a4-42bb-8dac-86ea0f5ddee1,Namespace:kube-system,Attempt:1,}" Sep 6 01:22:58.390552 systemd[1]: run-netns-cni\x2d1643208d\x2d4a32\x2d2d83\x2da476\x2d97eba0fb32f1.mount: Deactivated successfully. Sep 6 01:22:58.591012 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie8b34c027fc: link becomes ready Sep 6 01:22:58.590803 systemd-networkd[1753]: calie8b34c027fc: Link UP Sep 6 01:22:58.590923 systemd-networkd[1753]: calie8b34c027fc: Gained carrier Sep 6 01:22:58.611902 kubelet[2729]: I0906 01:22:58.611157 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56cb94fc6-989l7" podStartSLOduration=28.444003711 podStartE2EDuration="31.611136316s" podCreationTimestamp="2025-09-06 01:22:27 +0000 UTC" firstStartedPulling="2025-09-06 01:22:54.835463994 +0000 UTC m=+44.723568204" lastFinishedPulling="2025-09-06 01:22:58.002596599 +0000 UTC m=+47.890700809" observedRunningTime="2025-09-06 01:22:58.59935559 +0000 UTC m=+48.487459800" watchObservedRunningTime="2025-09-06 01:22:58.611136316 +0000 UTC m=+48.499240526" Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.488 [INFO][4690] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0 coredns-7c65d6cfc9- kube-system a84708b2-35a4-42bb-8dac-86ea0f5ddee1 976 0 2025-09-06 01:22:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-34c19deec5 coredns-7c65d6cfc9-5pdmz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie8b34c027fc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5pdmz" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-" Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.488 [INFO][4690] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5pdmz" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.512 [INFO][4702] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" HandleID="k8s-pod-network.ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:22:58.615441 env[1586]: 2025-09-06 
01:22:58.512 [INFO][4702] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" HandleID="k8s-pod-network.ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3040), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-34c19deec5", "pod":"coredns-7c65d6cfc9-5pdmz", "timestamp":"2025-09-06 01:22:58.512521584 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34c19deec5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.512 [INFO][4702] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.512 [INFO][4702] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.512 [INFO][4702] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34c19deec5' Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.522 [INFO][4702] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.527 [INFO][4702] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.531 [INFO][4702] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.533 [INFO][4702] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.535 [INFO][4702] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.535 [INFO][4702] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.536 [INFO][4702] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79 Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.541 [INFO][4702] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.572 [INFO][4702] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.70/26] block=192.168.61.64/26 handle="k8s-pod-network.ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.572 [INFO][4702] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.70/26] handle="k8s-pod-network.ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.572 [INFO][4702] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
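The pod_startup_latency_tracker entry a little above reports podStartE2EDuration="31.611136316s" and podStartSLOduration=28.444003711 for calico-apiserver-56cb94fc6-989l7. Those figures are consistent with the end-to-end duration being watchObservedRunningTime minus podCreationTimestamp, and the SLO duration additionally excluding the image-pull window (lastFinishedPulling minus firstStartedPulling). A short, illustrative Go check using the timestamps exactly as logged (this is not kubelet code, and the relation is inferred from the logged values):

```go
// Illustrative check only (not kubelet code): the startup figures logged above
// are consistent with
//   E2E = watchObservedRunningTime - podCreationTimestamp
//   SLO = E2E - (lastFinishedPulling - firstStartedPulling)
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-06 01:22:27 +0000 UTC")
	firstPull := mustParse("2025-09-06 01:22:54.835463994 +0000 UTC")
	lastPull := mustParse("2025-09-06 01:22:58.002596599 +0000 UTC")
	running := mustParse("2025-09-06 01:22:58.611136316 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Printf("E2E=%s SLO=%s\n", e2e, slo) // E2E=31.611136316s SLO=28.444003711s
}
```

31.611136316s minus the 3.167132605s pull window gives 28.444003711s, matching the logged SLO value.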
Sep 6 01:22:58.615441 env[1586]: 2025-09-06 01:22:58.572 [INFO][4702] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.70/26] IPv6=[] ContainerID="ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" HandleID="k8s-pod-network.ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:22:58.616193 env[1586]: 2025-09-06 01:22:58.574 [INFO][4690] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5pdmz" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a84708b2-35a4-42bb-8dac-86ea0f5ddee1", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"", Pod:"coredns-7c65d6cfc9-5pdmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8b34c027fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:22:58.616193 env[1586]: 2025-09-06 01:22:58.574 [INFO][4690] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.70/32] ContainerID="ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5pdmz" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:22:58.616193 env[1586]: 2025-09-06 01:22:58.574 [INFO][4690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8b34c027fc ContainerID="ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5pdmz" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:22:58.616193 env[1586]: 2025-09-06 01:22:58.579 [INFO][4690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5pdmz" 
WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:22:58.616193 env[1586]: 2025-09-06 01:22:58.590 [INFO][4690] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5pdmz" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a84708b2-35a4-42bb-8dac-86ea0f5ddee1", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79", Pod:"coredns-7c65d6cfc9-5pdmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8b34c027fc", MAC:"6a:40:1e:62:e0:2d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:22:58.616193 env[1586]: 2025-09-06 01:22:58.612 [INFO][4690] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5pdmz" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:22:58.631000 audit[4718]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:58.631000 audit[4718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc41a94c0 a2=0 a3=1 items=0 ppid=2834 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:58.631000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:58.637028 env[1586]: time="2025-09-06T01:22:58.636666009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:22:58.637028 env[1586]: time="2025-09-06T01:22:58.636707969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:22:58.637028 env[1586]: time="2025-09-06T01:22:58.636718049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:22:58.637028 env[1586]: time="2025-09-06T01:22:58.636971929Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79 pid=4726 runtime=io.containerd.runc.v2 Sep 6 01:22:58.642000 audit[4718]: NETFILTER_CFG table=nat:115 family=2 entries=16 op=nft_register_rule pid=4718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:58.642000 audit[4718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4236 a0=3 a1=ffffc41a94c0 a2=0 a3=1 items=0 ppid=2834 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:58.642000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:58.658000 audit[4743]: NETFILTER_CFG table=filter:116 family=2 entries=64 op=nft_register_chain pid=4743 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:22:58.658000 audit[4743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30156 a0=3 a1=ffffc9815e30 a2=0 a3=ffffa715cfa8 items=0 ppid=3942 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:58.658000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:22:58.707878 env[1586]: time="2025-09-06T01:22:58.707829446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5pdmz,Uid:a84708b2-35a4-42bb-8dac-86ea0f5ddee1,Namespace:kube-system,Attempt:1,} returns sandbox id \"ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79\"" Sep 6 01:22:58.711280 env[1586]: time="2025-09-06T01:22:58.710804528Z" level=info msg="CreateContainer within sandbox \"ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 01:22:58.725826 systemd-networkd[1753]: calie261c37c714: Gained IPv6LL Sep 6 01:22:58.797119 env[1586]: time="2025-09-06T01:22:58.797068292Z" level=info msg="CreateContainer within sandbox \"ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c9940d3694ee79e954a2057013fbae5dce8fa9710dc4a230d20abab2159d4e9\"" Sep 6 01:22:58.798044 env[1586]: time="2025-09-06T01:22:58.798016093Z" level=info msg="StartContainer for \"7c9940d3694ee79e954a2057013fbae5dce8fa9710dc4a230d20abab2159d4e9\"" Sep 6 01:22:58.869846 env[1586]: time="2025-09-06T01:22:58.869736210Z" level=info msg="StartContainer for \"7c9940d3694ee79e954a2057013fbae5dce8fa9710dc4a230d20abab2159d4e9\" returns successfully" Sep 6 01:22:59.274899 env[1586]: 
time="2025-09-06T01:22:59.274865420Z" level=info msg="StopPodSandbox for \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\"" Sep 6 01:22:59.289713 env[1586]: time="2025-09-06T01:22:59.289672307Z" level=info msg="StopPodSandbox for \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\"" Sep 6 01:22:59.489348 env[1586]: 2025-09-06 01:22:59.431 [INFO][4822] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Sep 6 01:22:59.489348 env[1586]: 2025-09-06 01:22:59.432 [INFO][4822] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" iface="eth0" netns="/var/run/netns/cni-1f5efcc1-bd52-b679-e2b9-3298bd33f86e" Sep 6 01:22:59.489348 env[1586]: 2025-09-06 01:22:59.432 [INFO][4822] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" iface="eth0" netns="/var/run/netns/cni-1f5efcc1-bd52-b679-e2b9-3298bd33f86e" Sep 6 01:22:59.489348 env[1586]: 2025-09-06 01:22:59.432 [INFO][4822] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" iface="eth0" netns="/var/run/netns/cni-1f5efcc1-bd52-b679-e2b9-3298bd33f86e" Sep 6 01:22:59.489348 env[1586]: 2025-09-06 01:22:59.432 [INFO][4822] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Sep 6 01:22:59.489348 env[1586]: 2025-09-06 01:22:59.432 [INFO][4822] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Sep 6 01:22:59.489348 env[1586]: 2025-09-06 01:22:59.474 [INFO][4831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" HandleID="k8s-pod-network.f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:22:59.489348 env[1586]: 2025-09-06 01:22:59.476 [INFO][4831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:22:59.489348 env[1586]: 2025-09-06 01:22:59.476 [INFO][4831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:22:59.489348 env[1586]: 2025-09-06 01:22:59.486 [WARNING][4831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" HandleID="k8s-pod-network.f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:22:59.489348 env[1586]: 2025-09-06 01:22:59.486 [INFO][4831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" HandleID="k8s-pod-network.f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:22:59.489348 env[1586]: 2025-09-06 01:22:59.487 [INFO][4831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:22:59.489348 env[1586]: 2025-09-06 01:22:59.488 [INFO][4822] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Sep 6 01:22:59.492511 systemd[1]: run-netns-cni\x2d1f5efcc1\x2dbd52\x2db679\x2de2b9\x2d3298bd33f86e.mount: Deactivated successfully. Sep 6 01:22:59.493071 env[1586]: time="2025-09-06T01:22:59.493033292Z" level=info msg="TearDown network for sandbox \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\" successfully" Sep 6 01:22:59.493166 env[1586]: time="2025-09-06T01:22:59.493149652Z" level=info msg="StopPodSandbox for \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\" returns successfully" Sep 6 01:22:59.493920 env[1586]: time="2025-09-06T01:22:59.493892132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fd44b5cd-ggmhh,Uid:f4c05567-6639-4b0f-94fa-e71542024f21,Namespace:calico-system,Attempt:1,}" Sep 6 01:22:59.556073 kubelet[2729]: I0906 01:22:59.554404 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 01:22:59.581000 audit[4838]: NETFILTER_CFG table=filter:117 family=2 entries=18 op=nft_register_rule pid=4838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:59.581000 audit[4838]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc4633ee0 a2=0 a3=1 items=0 ppid=2834 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:59.581000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:59.588000 audit[4838]: NETFILTER_CFG table=nat:118 family=2 entries=16 op=nft_register_rule pid=4838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:59.588000 audit[4838]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4236 a0=3 a1=ffffc4633ee0 a2=0 a3=1 items=0 ppid=2834 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:59.588000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:59.601806 kubelet[2729]: I0906 01:22:59.601748 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5pdmz" podStartSLOduration=43.601729628 podStartE2EDuration="43.601729628s" podCreationTimestamp="2025-09-06 01:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:22:59.572356973 +0000 UTC m=+49.460461183" watchObservedRunningTime="2025-09-06 01:22:59.601729628 +0000 UTC m=+49.489833838" Sep 6 01:22:59.642000 audit[4845]: NETFILTER_CFG table=filter:119 family=2 entries=15 op=nft_register_rule pid=4845 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:59.642000 audit[4845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=fffffc110670 a2=0 a3=1 items=0 ppid=2834 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:59.642000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:59.648000 audit[4845]: NETFILTER_CFG table=nat:120 family=2 entries=37 op=nft_register_chain pid=4845 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:22:59.648000 audit[4845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14964 a0=3 a1=fffffc110670 a2=0 a3=1 items=0 ppid=2834 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:59.648000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:22:59.669712 env[1586]: 2025-09-06 01:22:59.609 [INFO][4818] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Sep 6 01:22:59.669712 env[1586]: 2025-09-06 01:22:59.610 [INFO][4818] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" iface="eth0" netns="/var/run/netns/cni-e2283dca-ad3c-c0e5-896e-1c17f3ea3692" Sep 6 01:22:59.669712 env[1586]: 2025-09-06 01:22:59.610 [INFO][4818] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" iface="eth0" netns="/var/run/netns/cni-e2283dca-ad3c-c0e5-896e-1c17f3ea3692" Sep 6 01:22:59.669712 env[1586]: 2025-09-06 01:22:59.610 [INFO][4818] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" iface="eth0" netns="/var/run/netns/cni-e2283dca-ad3c-c0e5-896e-1c17f3ea3692" Sep 6 01:22:59.669712 env[1586]: 2025-09-06 01:22:59.610 [INFO][4818] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Sep 6 01:22:59.669712 env[1586]: 2025-09-06 01:22:59.610 [INFO][4818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Sep 6 01:22:59.669712 env[1586]: 2025-09-06 01:22:59.654 [INFO][4840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" HandleID="k8s-pod-network.d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:22:59.669712 env[1586]: 2025-09-06 01:22:59.654 [INFO][4840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:22:59.669712 env[1586]: 2025-09-06 01:22:59.654 [INFO][4840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:22:59.669712 env[1586]: 2025-09-06 01:22:59.665 [WARNING][4840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" HandleID="k8s-pod-network.d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:22:59.669712 env[1586]: 2025-09-06 01:22:59.665 [INFO][4840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" HandleID="k8s-pod-network.d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:22:59.669712 env[1586]: 2025-09-06 01:22:59.667 [INFO][4840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:22:59.669712 env[1586]: 2025-09-06 01:22:59.668 [INFO][4818] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Sep 6 01:22:59.673398 systemd[1]: run-netns-cni\x2de2283dca\x2dad3c\x2dc0e5\x2d896e\x2d1c17f3ea3692.mount: Deactivated successfully. Sep 6 01:22:59.674629 env[1586]: time="2025-09-06T01:22:59.674591905Z" level=info msg="TearDown network for sandbox \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\" successfully" Sep 6 01:22:59.674715 env[1586]: time="2025-09-06T01:22:59.674698986Z" level=info msg="StopPodSandbox for \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\" returns successfully" Sep 6 01:22:59.675818 env[1586]: time="2025-09-06T01:22:59.675794186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d2dkt,Uid:7e1ef8a5-5849-44eb-8c06-2ea19305f74d,Namespace:kube-system,Attempt:1,}" Sep 6 01:22:59.749821 systemd-networkd[1753]: calie8b34c027fc: Gained IPv6LL Sep 6 01:23:00.151505 kubelet[2729]: I0906 01:23:00.151007 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 01:23:00.174447 systemd[1]: run-containerd-runc-k8s.io-31c28442c4258c80440ede4c427d6dcda161e5b05d2e882f5c1546afc6c909cb-runc.jhy7C4.mount: Deactivated successfully. 
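The run-netns-cni\x2d….mount units that systemd deactivates during the teardowns above are mount units whose names escape "-" as \x2d. A small illustrative helper (not systemd code) that undoes the \xNN escaping and recovers the CNI netns name shown in the Calico "Deleting workload's device in netns" messages:

```go
// Illustrative helper (not systemd code): mount unit names such as
// run-netns-cni\x2d1643208d\x2d....mount escape "-" as \x2d; undo the
// \xNN escaping to recover the netns name used in the CNI teardown above.
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

var esc = regexp.MustCompile(`\\x([0-9a-fA-F]{2})`)

func unescapeUnit(s string) string {
	return esc.ReplaceAllStringFunc(s, func(m string) string {
		n, _ := strconv.ParseUint(m[2:], 16, 8)
		return string(rune(n))
	})
}

func main() {
	fmt.Println(unescapeUnit(`run-netns-cni\x2d1643208d\x2d4a32\x2d2d83\x2da476\x2d97eba0fb32f1.mount`))
	// run-netns-cni-1643208d-4a32-2d83-a476-97eba0fb32f1.mount
}
```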
Sep 6 01:23:00.211534 env[1586]: time="2025-09-06T01:23:00.211495741Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:00.263802 env[1586]: time="2025-09-06T01:23:00.263751728Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:00.290959 env[1586]: time="2025-09-06T01:23:00.290914141Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:00.307281 env[1586]: time="2025-09-06T01:23:00.302723388Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:00.307281 env[1586]: time="2025-09-06T01:23:00.302977828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 6 01:23:00.311328 env[1586]: time="2025-09-06T01:23:00.311264632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 6 01:23:00.314480 env[1586]: time="2025-09-06T01:23:00.314451513Z" level=info msg="CreateContainer within sandbox \"2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 6 01:23:00.418012 systemd[1]: run-containerd-runc-k8s.io-31c28442c4258c80440ede4c427d6dcda161e5b05d2e882f5c1546afc6c909cb-runc.rXNOYw.mount: Deactivated successfully. 
Sep 6 01:23:00.423267 env[1586]: time="2025-09-06T01:23:00.423048689Z" level=info msg="CreateContainer within sandbox \"2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cca70ce4b51f362b460bb04ac252cbd8058b846ea7bbb6b12dcd6cebcf991c90\"" Sep 6 01:23:00.426323 env[1586]: time="2025-09-06T01:23:00.425202090Z" level=info msg="StartContainer for \"cca70ce4b51f362b460bb04ac252cbd8058b846ea7bbb6b12dcd6cebcf991c90\"" Sep 6 01:23:00.525019 systemd-networkd[1753]: calib6453dd5dc0: Link UP Sep 6 01:23:00.537454 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:23:00.537578 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib6453dd5dc0: link becomes ready Sep 6 01:23:00.542715 systemd-networkd[1753]: calib6453dd5dc0: Gained carrier Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.297 [INFO][4869] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0 calico-kube-controllers-9fd44b5cd- calico-system f4c05567-6639-4b0f-94fa-e71542024f21 993 0 2025-09-06 01:22:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9fd44b5cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.8-n-34c19deec5 calico-kube-controllers-9fd44b5cd-ggmhh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib6453dd5dc0 [] [] }} ContainerID="19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" Namespace="calico-system" Pod="calico-kube-controllers-9fd44b5cd-ggmhh" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-" Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.298 [INFO][4869] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" Namespace="calico-system" Pod="calico-kube-controllers-9fd44b5cd-ggmhh" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.423 [INFO][4894] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" HandleID="k8s-pod-network.19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.423 [INFO][4894] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" HandleID="k8s-pod-network.19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-34c19deec5", "pod":"calico-kube-controllers-9fd44b5cd-ggmhh", "timestamp":"2025-09-06 01:23:00.420871568 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34c19deec5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.423 [INFO][4894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.423 [INFO][4894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.423 [INFO][4894] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34c19deec5' Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.441 [INFO][4894] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.449 [INFO][4894] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.454 [INFO][4894] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.457 [INFO][4894] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.464 [INFO][4894] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.464 [INFO][4894] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.467 [INFO][4894] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22 Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.474 [INFO][4894] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.496 [INFO][4894] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.71/26] block=192.168.61.64/26 handle="k8s-pod-network.19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.496 [INFO][4894] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.71/26] handle="k8s-pod-network.19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.496 [INFO][4894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 01:23:00.567327 env[1586]: 2025-09-06 01:23:00.496 [INFO][4894] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.71/26] IPv6=[] ContainerID="19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" HandleID="k8s-pod-network.19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:23:00.567916 env[1586]: 2025-09-06 01:23:00.509 [INFO][4869] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" Namespace="calico-system" Pod="calico-kube-controllers-9fd44b5cd-ggmhh" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0", GenerateName:"calico-kube-controllers-9fd44b5cd-", Namespace:"calico-system", SelfLink:"", UID:"f4c05567-6639-4b0f-94fa-e71542024f21", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9fd44b5cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"", Pod:"calico-kube-controllers-9fd44b5cd-ggmhh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib6453dd5dc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:00.567916 env[1586]: 2025-09-06 01:23:00.509 [INFO][4869] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.71/32] ContainerID="19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" Namespace="calico-system" Pod="calico-kube-controllers-9fd44b5cd-ggmhh" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:23:00.567916 env[1586]: 2025-09-06 01:23:00.509 [INFO][4869] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6453dd5dc0 ContainerID="19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" Namespace="calico-system" Pod="calico-kube-controllers-9fd44b5cd-ggmhh" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:23:00.567916 env[1586]: 2025-09-06 01:23:00.549 [INFO][4869] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" Namespace="calico-system" Pod="calico-kube-controllers-9fd44b5cd-ggmhh" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:23:00.567916 env[1586]: 2025-09-06 
01:23:00.549 [INFO][4869] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" Namespace="calico-system" Pod="calico-kube-controllers-9fd44b5cd-ggmhh" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0", GenerateName:"calico-kube-controllers-9fd44b5cd-", Namespace:"calico-system", SelfLink:"", UID:"f4c05567-6639-4b0f-94fa-e71542024f21", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9fd44b5cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22", Pod:"calico-kube-controllers-9fd44b5cd-ggmhh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib6453dd5dc0", MAC:"2e:d6:c1:6b:00:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:00.567916 env[1586]: 2025-09-06 01:23:00.565 [INFO][4869] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22" Namespace="calico-system" Pod="calico-kube-controllers-9fd44b5cd-ggmhh" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:23:00.591345 env[1586]: time="2025-09-06T01:23:00.591095015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:23:00.591345 env[1586]: time="2025-09-06T01:23:00.591130575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:23:00.591345 env[1586]: time="2025-09-06T01:23:00.591140455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:23:00.591532 env[1586]: time="2025-09-06T01:23:00.591400255Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22 pid=4969 runtime=io.containerd.runc.v2 Sep 6 01:23:00.602119 systemd-networkd[1753]: cali27cd9335711: Link UP Sep 6 01:23:00.612326 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali27cd9335711: link becomes ready Sep 6 01:23:00.611573 systemd-networkd[1753]: cali27cd9335711: Gained carrier Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.332 [INFO][4880] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0 coredns-7c65d6cfc9- kube-system 7e1ef8a5-5849-44eb-8c06-2ea19305f74d 998 0 2025-09-06 01:22:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-34c19deec5 coredns-7c65d6cfc9-d2dkt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali27cd9335711 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d2dkt" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-" Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.332 [INFO][4880] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d2dkt" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.475 [INFO][4902] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" HandleID="k8s-pod-network.2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.475 [INFO][4902] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" HandleID="k8s-pod-network.2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa510), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-34c19deec5", "pod":"coredns-7c65d6cfc9-d2dkt", "timestamp":"2025-09-06 01:23:00.475732396 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34c19deec5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.476 [INFO][4902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.496 [INFO][4902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.496 [INFO][4902] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34c19deec5' Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.539 [INFO][4902] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.550 [INFO][4902] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.568 [INFO][4902] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.570 [INFO][4902] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.573 [INFO][4902] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.573 [INFO][4902] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.575 [INFO][4902] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595 Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.582 [INFO][4902] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.598 [INFO][4902] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.72/26] block=192.168.61.64/26 handle="k8s-pod-network.2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.598 [INFO][4902] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.72/26] handle="k8s-pod-network.2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" host="ci-3510.3.8-n-34c19deec5" Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.598 [INFO][4902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
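
The IPAM trace above shows the plugin confirming block affinity for 192.168.61.64/26 on this node and then claiming 192.168.61.72 from that block. A quick stdlib check, using only the values printed in those records, confirms the claimed address sits inside the affine /26 and that the /32 handed to the pod is covered by it:

import ipaddress

# Values taken from the ipam records above.
block = ipaddress.ip_network("192.168.61.64/26")
claimed = ipaddress.ip_address("192.168.61.72")
pod_cidr = ipaddress.ip_network("192.168.61.72/32")

print(block.num_addresses)        # 64 addresses in a /26 block
print(claimed in block)           # True: the claimed IP is inside the affine block
print(pod_cidr.subnet_of(block))  # True: the /32 assigned to the pod is covered by the block
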
Sep 6 01:23:00.634028 env[1586]: 2025-09-06 01:23:00.598 [INFO][4902] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.72/26] IPv6=[] ContainerID="2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" HandleID="k8s-pod-network.2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:23:00.634681 env[1586]: 2025-09-06 01:23:00.600 [INFO][4880] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d2dkt" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7e1ef8a5-5849-44eb-8c06-2ea19305f74d", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"", Pod:"coredns-7c65d6cfc9-d2dkt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27cd9335711", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:00.634681 env[1586]: 2025-09-06 01:23:00.600 [INFO][4880] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.72/32] ContainerID="2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d2dkt" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:23:00.634681 env[1586]: 2025-09-06 01:23:00.600 [INFO][4880] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27cd9335711 ContainerID="2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d2dkt" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:23:00.634681 env[1586]: 2025-09-06 01:23:00.613 [INFO][4880] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d2dkt" 
WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:23:00.634681 env[1586]: 2025-09-06 01:23:00.613 [INFO][4880] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d2dkt" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7e1ef8a5-5849-44eb-8c06-2ea19305f74d", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595", Pod:"coredns-7c65d6cfc9-d2dkt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27cd9335711", MAC:"a2:df:1b:68:6b:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:00.634681 env[1586]: 2025-09-06 01:23:00.627 [INFO][4880] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d2dkt" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:23:00.648000 audit[4999]: NETFILTER_CFG table=filter:121 family=2 entries=58 op=nft_register_chain pid=4999 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:23:00.655219 kernel: kauditd_printk_skb: 29 callbacks suppressed Sep 6 01:23:00.655336 kernel: audit: type=1325 audit(1757121780.648:438): table=filter:121 family=2 entries=58 op=nft_register_chain pid=4999 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:23:00.648000 audit[4999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27164 a0=3 a1=ffffed1c9770 a2=0 a3=ffff92f88fa8 items=0 ppid=3942 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:00.697518 env[1586]: 
time="2025-09-06T01:23:00.684833262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:23:00.697518 env[1586]: time="2025-09-06T01:23:00.684873302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:23:00.697518 env[1586]: time="2025-09-06T01:23:00.684883342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:23:00.648000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:23:00.726755 env[1586]: time="2025-09-06T01:23:00.719056000Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595 pid=5008 runtime=io.containerd.runc.v2 Sep 6 01:23:00.734751 kernel: audit: type=1300 audit(1757121780.648:438): arch=c00000b7 syscall=211 success=yes exit=27164 a0=3 a1=ffffed1c9770 a2=0 a3=ffff92f88fa8 items=0 ppid=3942 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:00.734923 kernel: audit: type=1327 audit(1757121780.648:438): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:23:00.760887 env[1586]: time="2025-09-06T01:23:00.760850941Z" level=info msg="StartContainer for \"cca70ce4b51f362b460bb04ac252cbd8058b846ea7bbb6b12dcd6cebcf991c90\" returns successfully" Sep 6 01:23:00.792000 audit[5045]: NETFILTER_CFG table=filter:122 family=2 entries=54 op=nft_register_chain pid=5045 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:23:00.792000 audit[5045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25540 a0=3 a1=ffffe3229b90 a2=0 a3=ffffad356fa8 items=0 ppid=3942 pid=5045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:00.839518 kernel: audit: type=1325 audit(1757121780.792:439): table=filter:122 family=2 entries=54 op=nft_register_chain pid=5045 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:23:00.840574 kernel: audit: type=1300 audit(1757121780.792:439): arch=c00000b7 syscall=211 success=yes exit=25540 a0=3 a1=ffffe3229b90 a2=0 a3=ffffad356fa8 items=0 ppid=3942 pid=5045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:00.792000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:23:00.841973 env[1586]: time="2025-09-06T01:23:00.841938183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fd44b5cd-ggmhh,Uid:f4c05567-6639-4b0f-94fa-e71542024f21,Namespace:calico-system,Attempt:1,} returns sandbox id \"19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22\"" Sep 6 01:23:00.856506 
kernel: audit: type=1327 audit(1757121780.792:439): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:23:00.880482 env[1586]: time="2025-09-06T01:23:00.880437762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d2dkt,Uid:7e1ef8a5-5849-44eb-8c06-2ea19305f74d,Namespace:kube-system,Attempt:1,} returns sandbox id \"2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595\"" Sep 6 01:23:00.883876 env[1586]: time="2025-09-06T01:23:00.883844884Z" level=info msg="CreateContainer within sandbox \"2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 01:23:00.958234 env[1586]: time="2025-09-06T01:23:00.958129602Z" level=info msg="CreateContainer within sandbox \"2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"090bd2290ba30304248da9d8318b4190f97198bd1c12a72703a56e6a9aab08a9\"" Sep 6 01:23:00.959629 env[1586]: time="2025-09-06T01:23:00.959600963Z" level=info msg="StartContainer for \"090bd2290ba30304248da9d8318b4190f97198bd1c12a72703a56e6a9aab08a9\"" Sep 6 01:23:01.012034 env[1586]: time="2025-09-06T01:23:01.011979029Z" level=info msg="StartContainer for \"090bd2290ba30304248da9d8318b4190f97198bd1c12a72703a56e6a9aab08a9\" returns successfully" Sep 6 01:23:01.582441 kubelet[2729]: I0906 01:23:01.582382 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-d2dkt" podStartSLOduration=45.582366797 podStartE2EDuration="45.582366797s" podCreationTimestamp="2025-09-06 01:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:23:01.580725916 +0000 UTC m=+51.468830126" watchObservedRunningTime="2025-09-06 01:23:01.582366797 +0000 UTC m=+51.470471007" Sep 6 01:23:01.615000 audit[5102]: NETFILTER_CFG table=filter:123 family=2 entries=12 op=nft_register_rule pid=5102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:01.615000 audit[5102]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffca2e2740 a2=0 a3=1 items=0 ppid=2834 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:01.657081 kernel: audit: type=1325 audit(1757121781.615:440): table=filter:123 family=2 entries=12 op=nft_register_rule pid=5102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:01.657262 kernel: audit: type=1300 audit(1757121781.615:440): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffca2e2740 a2=0 a3=1 items=0 ppid=2834 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:01.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:01.670537 kernel: audit: type=1327 audit(1757121781.615:440): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:01.671000 audit[5102]: NETFILTER_CFG table=nat:124 family=2 
entries=46 op=nft_register_rule pid=5102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:01.671000 audit[5102]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14964 a0=3 a1=ffffca2e2740 a2=0 a3=1 items=0 ppid=2834 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:01.671000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:01.694266 kernel: audit: type=1325 audit(1757121781.671:441): table=nat:124 family=2 entries=46 op=nft_register_rule pid=5102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:02.117442 systemd-networkd[1753]: cali27cd9335711: Gained IPv6LL Sep 6 01:23:02.309400 systemd-networkd[1753]: calib6453dd5dc0: Gained IPv6LL Sep 6 01:23:02.672000 audit[5110]: NETFILTER_CFG table=filter:125 family=2 entries=12 op=nft_register_rule pid=5110 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:02.672000 audit[5110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=fffff18b5570 a2=0 a3=1 items=0 ppid=2834 pid=5110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:02.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:02.711000 audit[5110]: NETFILTER_CFG table=nat:126 family=2 entries=58 op=nft_register_chain pid=5110 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:02.711000 audit[5110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20628 a0=3 a1=fffff18b5570 a2=0 a3=1 items=0 ppid=2834 pid=5110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:02.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:02.745393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157448730.mount: Deactivated successfully. 
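
The audit NETFILTER_CFG/SYSCALL records above end with a PROCTITLE field that carries the triggering command line as hex with NUL separators. Decoding the two values shown recovers the iptables restore invocations behind these rule registrations; the helper below is a small sketch, and the expected output follows directly from the hex strings in the log:

def decode_proctitle(hex_str: str) -> str:
    """audit PROCTITLE fields are hex-encoded argv strings with NUL separators."""
    return " ".join(part.decode() for part in bytes.fromhex(hex_str).split(b"\x00") if part)

# The two proctitle values from the audit records above.
print(decode_proctitle(
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365"
    "002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
))  # -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000

print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368"
    "002D2D636F756E74657273"
))  # -> iptables-restore -w 5 -W 100000 --noflush --counters

The two variants also come from different parent processes (ppid=3942 for the iptables-nft-restore calls, ppid=2834 for the iptables-restore calls), which is visible in the records themselves.
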
Sep 6 01:23:04.367490 env[1586]: time="2025-09-06T01:23:04.367449985Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:04.388733 env[1586]: time="2025-09-06T01:23:04.388694515Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:04.398224 env[1586]: time="2025-09-06T01:23:04.398169880Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:04.407618 env[1586]: time="2025-09-06T01:23:04.407586125Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:04.408226 env[1586]: time="2025-09-06T01:23:04.408199565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 6 01:23:04.410736 env[1586]: time="2025-09-06T01:23:04.410695526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 01:23:04.411482 env[1586]: time="2025-09-06T01:23:04.411443447Z" level=info msg="CreateContainer within sandbox \"b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 6 01:23:04.487699 env[1586]: time="2025-09-06T01:23:04.487645124Z" level=info msg="CreateContainer within sandbox \"b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b08420d6fe328983dd2438f8c261e713497b7420bc85e27786f356502947a85c\"" Sep 6 01:23:04.488380 env[1586]: time="2025-09-06T01:23:04.488356764Z" level=info msg="StartContainer for \"b08420d6fe328983dd2438f8c261e713497b7420bc85e27786f356502947a85c\"" Sep 6 01:23:04.564042 env[1586]: time="2025-09-06T01:23:04.559402199Z" level=info msg="StartContainer for \"b08420d6fe328983dd2438f8c261e713497b7420bc85e27786f356502947a85c\" returns successfully" Sep 6 01:23:04.603858 systemd[1]: run-containerd-runc-k8s.io-b08420d6fe328983dd2438f8c261e713497b7420bc85e27786f356502947a85c-runc.k6eVIm.mount: Deactivated successfully. 
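
Each completed image pull above ends with a containerd message of the form PullImage \"<name:tag>\" returns image reference \"sha256:<id>\". If the tag-to-image-ID mapping this node ended up with is wanted, a sketch like the following extracts those pairs from the raw text; the regex is an assumption about the exact shape of these msg strings as dumped here (including their escaped quotes), not a containerd interface:

import re

# Matches only the "returns image reference" completion lines, not the initial pull requests.
PULL_RE = re.compile(
    r'PullImage \\"(?P<name>[^"\\]+)\\" returns image reference \\"(?P<ref>[^"\\]+)\\"'
)

def pulled_images(log_text: str) -> dict:
    """Map image name:tag -> resolved image reference, as reported by containerd."""
    return {m.group("name"): m.group("ref") for m in PULL_RE.finditer(log_text)}

Run over the records above, this would map ghcr.io/flatcar/calico/goldmane:v3.30.3 to sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685.
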
Sep 6 01:23:04.606257 kubelet[2729]: I0906 01:23:04.606165 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-nn22p" podStartSLOduration=24.935669549 podStartE2EDuration="32.606145902s" podCreationTimestamp="2025-09-06 01:22:32 +0000 UTC" firstStartedPulling="2025-09-06 01:22:56.739397013 +0000 UTC m=+46.627501183" lastFinishedPulling="2025-09-06 01:23:04.409873326 +0000 UTC m=+54.297977536" observedRunningTime="2025-09-06 01:23:04.605821342 +0000 UTC m=+54.493925552" watchObservedRunningTime="2025-09-06 01:23:04.606145902 +0000 UTC m=+54.494250112" Sep 6 01:23:04.640000 audit[5159]: NETFILTER_CFG table=filter:127 family=2 entries=12 op=nft_register_rule pid=5159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:04.640000 audit[5159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=fffff1459a00 a2=0 a3=1 items=0 ppid=2834 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:04.640000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:04.661000 audit[5159]: NETFILTER_CFG table=nat:128 family=2 entries=22 op=nft_register_rule pid=5159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:04.661000 audit[5159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=fffff1459a00 a2=0 a3=1 items=0 ppid=2834 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:04.661000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:04.808337 env[1586]: time="2025-09-06T01:23:04.808292242Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:04.827044 env[1586]: time="2025-09-06T01:23:04.826988011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:04.835233 env[1586]: time="2025-09-06T01:23:04.835192615Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:04.845233 env[1586]: time="2025-09-06T01:23:04.845179100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:04.845890 env[1586]: time="2025-09-06T01:23:04.845858020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 6 01:23:04.847576 env[1586]: time="2025-09-06T01:23:04.847543901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 6 01:23:04.848554 env[1586]: time="2025-09-06T01:23:04.848521061Z" level=info msg="CreateContainer 
within sandbox \"5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 6 01:23:04.911029 env[1586]: time="2025-09-06T01:23:04.910970452Z" level=info msg="CreateContainer within sandbox \"5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"37ea1e60a37faf15d2ecdfb8556234f2e0c8a15cbcdbd96aad6c18ada5bec7db\"" Sep 6 01:23:04.911904 env[1586]: time="2025-09-06T01:23:04.911858012Z" level=info msg="StartContainer for \"37ea1e60a37faf15d2ecdfb8556234f2e0c8a15cbcdbd96aad6c18ada5bec7db\"" Sep 6 01:23:05.014075 env[1586]: time="2025-09-06T01:23:05.013981343Z" level=info msg="StartContainer for \"37ea1e60a37faf15d2ecdfb8556234f2e0c8a15cbcdbd96aad6c18ada5bec7db\" returns successfully" Sep 6 01:23:05.615483 systemd[1]: run-containerd-runc-k8s.io-b08420d6fe328983dd2438f8c261e713497b7420bc85e27786f356502947a85c-runc.5JNwJm.mount: Deactivated successfully. Sep 6 01:23:05.648000 audit[5222]: NETFILTER_CFG table=filter:129 family=2 entries=12 op=nft_register_rule pid=5222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:05.648000 audit[5222]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=fffff0588120 a2=0 a3=1 items=0 ppid=2834 pid=5222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:05.648000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:05.666285 kernel: kauditd_printk_skb: 17 callbacks suppressed Sep 6 01:23:05.666404 kernel: audit: type=1325 audit(1757121785.652:447): table=nat:130 family=2 entries=22 op=nft_register_rule pid=5222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:05.652000 audit[5222]: NETFILTER_CFG table=nat:130 family=2 entries=22 op=nft_register_rule pid=5222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:05.652000 audit[5222]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=fffff0588120 a2=0 a3=1 items=0 ppid=2834 pid=5222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:05.698780 kernel: audit: type=1300 audit(1757121785.652:447): arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=fffff0588120 a2=0 a3=1 items=0 ppid=2834 pid=5222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:05.652000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:05.712144 kernel: audit: type=1327 audit(1757121785.652:447): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:06.600606 kubelet[2729]: I0906 01:23:06.600565 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 01:23:06.844842 env[1586]: time="2025-09-06T01:23:06.844799631Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:06.859211 env[1586]: time="2025-09-06T01:23:06.859104878Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:06.868624 env[1586]: time="2025-09-06T01:23:06.868577882Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:06.876875 env[1586]: time="2025-09-06T01:23:06.876831166Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:06.877550 env[1586]: time="2025-09-06T01:23:06.877508886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 6 01:23:06.880087 env[1586]: time="2025-09-06T01:23:06.880038928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 6 01:23:06.880348 env[1586]: time="2025-09-06T01:23:06.880321608Z" level=info msg="CreateContainer within sandbox \"2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 6 01:23:06.955114 env[1586]: time="2025-09-06T01:23:06.955071084Z" level=info msg="CreateContainer within sandbox \"2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3df6e3bd65a9433a97e7e3b49f84c3e7054b5cf6c1f40a72e2a82634d1eab613\"" Sep 6 01:23:06.956228 env[1586]: time="2025-09-06T01:23:06.956192604Z" level=info msg="StartContainer for \"3df6e3bd65a9433a97e7e3b49f84c3e7054b5cf6c1f40a72e2a82634d1eab613\"" Sep 6 01:23:06.999624 systemd[1]: run-containerd-runc-k8s.io-3df6e3bd65a9433a97e7e3b49f84c3e7054b5cf6c1f40a72e2a82634d1eab613-runc.PznfPU.mount: Deactivated successfully. 
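
The kubelet pod_startup_latency_tracker records in this section (goldmane above, calico-apiserver and csi-node-driver-r5mms below) print both wall-clock timestamps and monotonic m=+ offsets. Their numbers are consistent with podStartSLOduration being the end-to-end startup time minus the image-pull window; the short sketch below reproduces that arithmetic from the goldmane record's offsets. Treat it as an observation about these particular records, not a statement of kubelet's definition:

# Offsets in seconds since kubelet start (the m=+ values), copied from the goldmane record above.
first_started_pulling = 46.627501183
last_finished_pulling = 54.297977536
pod_e2e_duration      = 32.606145902   # printed as podStartE2EDuration

pull_window = last_finished_pulling - first_started_pulling
slo_duration = pod_e2e_duration - pull_window
print(f"image pull window:    {pull_window:.9f}s")   # ~7.670476353s
print(f"derived SLO duration: {slo_duration:.9f}s")  # ~24.935669549s, matching podStartSLOduration

The csi-node-driver-r5mms record below checks out the same way (35.626140005s end-to-end minus a ~10.888s pull window gives the printed 24.738492734).
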
Sep 6 01:23:07.051453 env[1586]: time="2025-09-06T01:23:07.051409770Z" level=info msg="StartContainer for \"3df6e3bd65a9433a97e7e3b49f84c3e7054b5cf6c1f40a72e2a82634d1eab613\" returns successfully" Sep 6 01:23:07.437679 kubelet[2729]: I0906 01:23:07.437645 2729 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 6 01:23:07.437902 kubelet[2729]: I0906 01:23:07.437693 2729 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 6 01:23:07.625935 kubelet[2729]: I0906 01:23:07.625861 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56cb94fc6-8w5rf" podStartSLOduration=33.52676457 podStartE2EDuration="40.625843325s" podCreationTimestamp="2025-09-06 01:22:27 +0000 UTC" firstStartedPulling="2025-09-06 01:22:57.747708625 +0000 UTC m=+47.635812835" lastFinishedPulling="2025-09-06 01:23:04.84678738 +0000 UTC m=+54.734891590" observedRunningTime="2025-09-06 01:23:05.619441917 +0000 UTC m=+55.507546127" watchObservedRunningTime="2025-09-06 01:23:07.625843325 +0000 UTC m=+57.513947535" Sep 6 01:23:07.626404 kubelet[2729]: I0906 01:23:07.626145 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-r5mms" podStartSLOduration=24.738492734 podStartE2EDuration="35.626140005s" podCreationTimestamp="2025-09-06 01:22:32 +0000 UTC" firstStartedPulling="2025-09-06 01:22:55.991170896 +0000 UTC m=+45.879275066" lastFinishedPulling="2025-09-06 01:23:06.878818127 +0000 UTC m=+56.766922337" observedRunningTime="2025-09-06 01:23:07.625801245 +0000 UTC m=+57.513905455" watchObservedRunningTime="2025-09-06 01:23:07.626140005 +0000 UTC m=+57.514244215" Sep 6 01:23:09.456072 env[1586]: time="2025-09-06T01:23:09.456020234Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:09.471981 env[1586]: time="2025-09-06T01:23:09.471933841Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:09.481679 env[1586]: time="2025-09-06T01:23:09.481631366Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:09.486550 env[1586]: time="2025-09-06T01:23:09.486505288Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:09.487154 env[1586]: time="2025-09-06T01:23:09.487112169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 6 01:23:09.503602 env[1586]: time="2025-09-06T01:23:09.502431776Z" level=info msg="CreateContainer within sandbox \"19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 6 01:23:09.573226 env[1586]: 
time="2025-09-06T01:23:09.573163009Z" level=info msg="CreateContainer within sandbox \"19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"778cf78c78b5b76721b95a03843c8a4a964e3e704bbc7af7a34a84fdf4c227ba\"" Sep 6 01:23:09.574086 env[1586]: time="2025-09-06T01:23:09.573912729Z" level=info msg="StartContainer for \"778cf78c78b5b76721b95a03843c8a4a964e3e704bbc7af7a34a84fdf4c227ba\"" Sep 6 01:23:09.670630 env[1586]: time="2025-09-06T01:23:09.670572055Z" level=info msg="StartContainer for \"778cf78c78b5b76721b95a03843c8a4a964e3e704bbc7af7a34a84fdf4c227ba\" returns successfully" Sep 6 01:23:10.273424 env[1586]: time="2025-09-06T01:23:10.273384218Z" level=info msg="StopPodSandbox for \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\"" Sep 6 01:23:10.369191 env[1586]: 2025-09-06 01:23:10.326 [WARNING][5317] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6be53ee4-d329-4ede-8f50-6b8eeba7191c", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5", Pod:"goldmane-7988f88666-nn22p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba7521ece98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:10.369191 env[1586]: 2025-09-06 01:23:10.326 [INFO][5317] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Sep 6 01:23:10.369191 env[1586]: 2025-09-06 01:23:10.326 [INFO][5317] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" iface="eth0" netns="" Sep 6 01:23:10.369191 env[1586]: 2025-09-06 01:23:10.326 [INFO][5317] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Sep 6 01:23:10.369191 env[1586]: 2025-09-06 01:23:10.327 [INFO][5317] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Sep 6 01:23:10.369191 env[1586]: 2025-09-06 01:23:10.356 [INFO][5325] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" HandleID="k8s-pod-network.23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Workload="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" Sep 6 01:23:10.369191 env[1586]: 2025-09-06 01:23:10.356 [INFO][5325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:10.369191 env[1586]: 2025-09-06 01:23:10.356 [INFO][5325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:23:10.369191 env[1586]: 2025-09-06 01:23:10.365 [WARNING][5325] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" HandleID="k8s-pod-network.23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Workload="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" Sep 6 01:23:10.369191 env[1586]: 2025-09-06 01:23:10.365 [INFO][5325] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" HandleID="k8s-pod-network.23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Workload="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" Sep 6 01:23:10.369191 env[1586]: 2025-09-06 01:23:10.366 [INFO][5325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:10.369191 env[1586]: 2025-09-06 01:23:10.367 [INFO][5317] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Sep 6 01:23:10.369646 env[1586]: time="2025-09-06T01:23:10.369230503Z" level=info msg="TearDown network for sandbox \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\" successfully" Sep 6 01:23:10.369646 env[1586]: time="2025-09-06T01:23:10.369281943Z" level=info msg="StopPodSandbox for \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\" returns successfully" Sep 6 01:23:10.369912 env[1586]: time="2025-09-06T01:23:10.369880143Z" level=info msg="RemovePodSandbox for \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\"" Sep 6 01:23:10.369969 env[1586]: time="2025-09-06T01:23:10.369916143Z" level=info msg="Forcibly stopping sandbox \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\"" Sep 6 01:23:10.502891 env[1586]: 2025-09-06 01:23:10.443 [WARNING][5340] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6be53ee4-d329-4ede-8f50-6b8eeba7191c", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"b3c856f56c2147b337451754c1d85321384641e72bb287a2f1f595ecf3a0e1a5", Pod:"goldmane-7988f88666-nn22p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba7521ece98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:10.502891 env[1586]: 2025-09-06 01:23:10.443 [INFO][5340] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Sep 6 01:23:10.502891 env[1586]: 2025-09-06 01:23:10.443 [INFO][5340] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" iface="eth0" netns="" Sep 6 01:23:10.502891 env[1586]: 2025-09-06 01:23:10.443 [INFO][5340] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Sep 6 01:23:10.502891 env[1586]: 2025-09-06 01:23:10.443 [INFO][5340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Sep 6 01:23:10.502891 env[1586]: 2025-09-06 01:23:10.472 [INFO][5347] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" HandleID="k8s-pod-network.23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Workload="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" Sep 6 01:23:10.502891 env[1586]: 2025-09-06 01:23:10.472 [INFO][5347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:10.502891 env[1586]: 2025-09-06 01:23:10.472 [INFO][5347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:23:10.502891 env[1586]: 2025-09-06 01:23:10.489 [WARNING][5347] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" HandleID="k8s-pod-network.23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Workload="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" Sep 6 01:23:10.502891 env[1586]: 2025-09-06 01:23:10.489 [INFO][5347] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" HandleID="k8s-pod-network.23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Workload="ci--3510.3.8--n--34c19deec5-k8s-goldmane--7988f88666--nn22p-eth0" Sep 6 01:23:10.502891 env[1586]: 2025-09-06 01:23:10.500 [INFO][5347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:10.502891 env[1586]: 2025-09-06 01:23:10.501 [INFO][5340] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227" Sep 6 01:23:10.503518 env[1586]: time="2025-09-06T01:23:10.502891565Z" level=info msg="TearDown network for sandbox \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\" successfully" Sep 6 01:23:10.521192 env[1586]: time="2025-09-06T01:23:10.521137334Z" level=info msg="RemovePodSandbox \"23fead3f824bc1f5ea71579e672e205b84cb4ec276e9bd218bd80bfaea886227\" returns successfully" Sep 6 01:23:10.521717 env[1586]: time="2025-09-06T01:23:10.521687854Z" level=info msg="StopPodSandbox for \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\"" Sep 6 01:23:10.610915 env[1586]: 2025-09-06 01:23:10.572 [WARNING][5361] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7e1ef8a5-5849-44eb-8c06-2ea19305f74d", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595", Pod:"coredns-7c65d6cfc9-d2dkt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27cd9335711", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:10.610915 env[1586]: 2025-09-06 01:23:10.573 [INFO][5361] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Sep 6 01:23:10.610915 env[1586]: 2025-09-06 01:23:10.573 [INFO][5361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" iface="eth0" netns="" Sep 6 01:23:10.610915 env[1586]: 2025-09-06 01:23:10.573 [INFO][5361] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Sep 6 01:23:10.610915 env[1586]: 2025-09-06 01:23:10.573 [INFO][5361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Sep 6 01:23:10.610915 env[1586]: 2025-09-06 01:23:10.594 [INFO][5368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" HandleID="k8s-pod-network.d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:23:10.610915 env[1586]: 2025-09-06 01:23:10.594 [INFO][5368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:10.610915 env[1586]: 2025-09-06 01:23:10.595 [INFO][5368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:23:10.610915 env[1586]: 2025-09-06 01:23:10.603 [WARNING][5368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" HandleID="k8s-pod-network.d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:23:10.610915 env[1586]: 2025-09-06 01:23:10.603 [INFO][5368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" HandleID="k8s-pod-network.d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:23:10.610915 env[1586]: 2025-09-06 01:23:10.604 [INFO][5368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:10.610915 env[1586]: 2025-09-06 01:23:10.609 [INFO][5361] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Sep 6 01:23:10.610915 env[1586]: time="2025-09-06T01:23:10.610890256Z" level=info msg="TearDown network for sandbox \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\" successfully" Sep 6 01:23:10.611368 env[1586]: time="2025-09-06T01:23:10.610921696Z" level=info msg="StopPodSandbox for \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\" returns successfully" Sep 6 01:23:10.612389 env[1586]: time="2025-09-06T01:23:10.611438216Z" level=info msg="RemovePodSandbox for \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\"" Sep 6 01:23:10.612389 env[1586]: time="2025-09-06T01:23:10.611472776Z" level=info msg="Forcibly stopping sandbox \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\"" Sep 6 01:23:10.709356 systemd[1]: run-containerd-runc-k8s.io-778cf78c78b5b76721b95a03843c8a4a964e3e704bbc7af7a34a84fdf4c227ba-runc.5SN3La.mount: Deactivated successfully. Sep 6 01:23:10.777821 env[1586]: 2025-09-06 01:23:10.677 [WARNING][5382] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7e1ef8a5-5849-44eb-8c06-2ea19305f74d", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"2a8b575adb245716a57ea94547d7329748e231fc05c33b65dc72fd52df28f595", Pod:"coredns-7c65d6cfc9-d2dkt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27cd9335711", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:10.777821 env[1586]: 2025-09-06 01:23:10.678 [INFO][5382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Sep 6 01:23:10.777821 env[1586]: 2025-09-06 01:23:10.678 [INFO][5382] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" iface="eth0" netns="" Sep 6 01:23:10.777821 env[1586]: 2025-09-06 01:23:10.678 [INFO][5382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Sep 6 01:23:10.777821 env[1586]: 2025-09-06 01:23:10.678 [INFO][5382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Sep 6 01:23:10.777821 env[1586]: 2025-09-06 01:23:10.754 [INFO][5395] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" HandleID="k8s-pod-network.d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:23:10.777821 env[1586]: 2025-09-06 01:23:10.759 [INFO][5395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:10.777821 env[1586]: 2025-09-06 01:23:10.759 [INFO][5395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:23:10.777821 env[1586]: 2025-09-06 01:23:10.773 [WARNING][5395] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" HandleID="k8s-pod-network.d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:23:10.777821 env[1586]: 2025-09-06 01:23:10.773 [INFO][5395] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" HandleID="k8s-pod-network.d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--d2dkt-eth0" Sep 6 01:23:10.777821 env[1586]: 2025-09-06 01:23:10.775 [INFO][5395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:10.777821 env[1586]: 2025-09-06 01:23:10.776 [INFO][5382] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1" Sep 6 01:23:10.778271 env[1586]: time="2025-09-06T01:23:10.777851574Z" level=info msg="TearDown network for sandbox \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\" successfully" Sep 6 01:23:10.796795 env[1586]: time="2025-09-06T01:23:10.796731262Z" level=info msg="RemovePodSandbox \"d6dd37adf893ea15f62d8619dd0427b888e8647e5f30926a70b3a493561fd1a1\" returns successfully" Sep 6 01:23:10.797277 env[1586]: time="2025-09-06T01:23:10.797231903Z" level=info msg="StopPodSandbox for \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\"" Sep 6 01:23:10.932219 kubelet[2729]: I0906 01:23:10.932127 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9fd44b5cd-ggmhh" podStartSLOduration=30.308610311 podStartE2EDuration="38.932108406s" podCreationTimestamp="2025-09-06 01:22:32 +0000 UTC" firstStartedPulling="2025-09-06 01:23:00.864707714 +0000 UTC m=+50.752811924" lastFinishedPulling="2025-09-06 01:23:09.488205849 +0000 UTC m=+59.376310019" observedRunningTime="2025-09-06 01:23:10.650824794 +0000 UTC m=+60.538929004" watchObservedRunningTime="2025-09-06 01:23:10.932108406 +0000 UTC m=+60.820212576" Sep 6 01:23:11.000507 env[1586]: 2025-09-06 01:23:10.897 [WARNING][5419] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-whisker--d784c57d5--vskf8-eth0" Sep 6 01:23:11.000507 env[1586]: 2025-09-06 01:23:10.898 [INFO][5419] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Sep 6 01:23:11.000507 env[1586]: 2025-09-06 01:23:10.898 [INFO][5419] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" iface="eth0" netns="" Sep 6 01:23:11.000507 env[1586]: 2025-09-06 01:23:10.899 [INFO][5419] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Sep 6 01:23:11.000507 env[1586]: 2025-09-06 01:23:10.899 [INFO][5419] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Sep 6 01:23:11.000507 env[1586]: 2025-09-06 01:23:10.987 [INFO][5431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" HandleID="k8s-pod-network.645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Workload="ci--3510.3.8--n--34c19deec5-k8s-whisker--d784c57d5--vskf8-eth0" Sep 6 01:23:11.000507 env[1586]: 2025-09-06 01:23:10.987 [INFO][5431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:11.000507 env[1586]: 2025-09-06 01:23:10.988 [INFO][5431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:23:11.000507 env[1586]: 2025-09-06 01:23:10.996 [WARNING][5431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" HandleID="k8s-pod-network.645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Workload="ci--3510.3.8--n--34c19deec5-k8s-whisker--d784c57d5--vskf8-eth0" Sep 6 01:23:11.000507 env[1586]: 2025-09-06 01:23:10.996 [INFO][5431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" HandleID="k8s-pod-network.645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Workload="ci--3510.3.8--n--34c19deec5-k8s-whisker--d784c57d5--vskf8-eth0" Sep 6 01:23:11.000507 env[1586]: 2025-09-06 01:23:10.997 [INFO][5431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:11.000507 env[1586]: 2025-09-06 01:23:10.999 [INFO][5419] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Sep 6 01:23:11.000507 env[1586]: time="2025-09-06T01:23:11.000334918Z" level=info msg="TearDown network for sandbox \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\" successfully" Sep 6 01:23:11.000507 env[1586]: time="2025-09-06T01:23:11.000361718Z" level=info msg="StopPodSandbox for \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\" returns successfully" Sep 6 01:23:11.001099 env[1586]: time="2025-09-06T01:23:11.001076278Z" level=info msg="RemovePodSandbox for \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\"" Sep 6 01:23:11.001211 env[1586]: time="2025-09-06T01:23:11.001176678Z" level=info msg="Forcibly stopping sandbox \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\"" Sep 6 01:23:11.086586 env[1586]: 2025-09-06 01:23:11.040 [WARNING][5447] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" WorkloadEndpoint="ci--3510.3.8--n--34c19deec5-k8s-whisker--d784c57d5--vskf8-eth0" Sep 6 01:23:11.086586 env[1586]: 2025-09-06 01:23:11.040 [INFO][5447] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Sep 6 01:23:11.086586 env[1586]: 2025-09-06 01:23:11.040 [INFO][5447] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" iface="eth0" netns="" Sep 6 01:23:11.086586 env[1586]: 2025-09-06 01:23:11.040 [INFO][5447] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Sep 6 01:23:11.086586 env[1586]: 2025-09-06 01:23:11.040 [INFO][5447] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Sep 6 01:23:11.086586 env[1586]: 2025-09-06 01:23:11.072 [INFO][5454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" HandleID="k8s-pod-network.645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Workload="ci--3510.3.8--n--34c19deec5-k8s-whisker--d784c57d5--vskf8-eth0" Sep 6 01:23:11.086586 env[1586]: 2025-09-06 01:23:11.072 [INFO][5454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:11.086586 env[1586]: 2025-09-06 01:23:11.072 [INFO][5454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 01:23:11.086586 env[1586]: 2025-09-06 01:23:11.080 [WARNING][5454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" HandleID="k8s-pod-network.645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Workload="ci--3510.3.8--n--34c19deec5-k8s-whisker--d784c57d5--vskf8-eth0" Sep 6 01:23:11.086586 env[1586]: 2025-09-06 01:23:11.080 [INFO][5454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" HandleID="k8s-pod-network.645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Workload="ci--3510.3.8--n--34c19deec5-k8s-whisker--d784c57d5--vskf8-eth0" Sep 6 01:23:11.086586 env[1586]: 2025-09-06 01:23:11.082 [INFO][5454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:11.086586 env[1586]: 2025-09-06 01:23:11.083 [INFO][5447] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8" Sep 6 01:23:11.087042 env[1586]: time="2025-09-06T01:23:11.086998518Z" level=info msg="TearDown network for sandbox \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\" successfully" Sep 6 01:23:11.105043 env[1586]: time="2025-09-06T01:23:11.104998366Z" level=info msg="RemovePodSandbox \"645064afc4d385ac4cb23154566f4dc71bbe72ec4a523283a85dc30873d7bab8\" returns successfully" Sep 6 01:23:11.105654 env[1586]: time="2025-09-06T01:23:11.105632606Z" level=info msg="StopPodSandbox for \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\"" Sep 6 01:23:11.186919 env[1586]: 2025-09-06 01:23:11.143 [WARNING][5471] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a84708b2-35a4-42bb-8dac-86ea0f5ddee1", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79", Pod:"coredns-7c65d6cfc9-5pdmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8b34c027fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:11.186919 env[1586]: 2025-09-06 01:23:11.143 [INFO][5471] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Sep 6 01:23:11.186919 env[1586]: 2025-09-06 01:23:11.144 [INFO][5471] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" iface="eth0" netns="" Sep 6 01:23:11.186919 env[1586]: 2025-09-06 01:23:11.144 [INFO][5471] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Sep 6 01:23:11.186919 env[1586]: 2025-09-06 01:23:11.144 [INFO][5471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Sep 6 01:23:11.186919 env[1586]: 2025-09-06 01:23:11.169 [INFO][5478] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" HandleID="k8s-pod-network.d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:23:11.186919 env[1586]: 2025-09-06 01:23:11.169 [INFO][5478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:11.186919 env[1586]: 2025-09-06 01:23:11.169 [INFO][5478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 01:23:11.186919 env[1586]: 2025-09-06 01:23:11.177 [WARNING][5478] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" HandleID="k8s-pod-network.d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:23:11.186919 env[1586]: 2025-09-06 01:23:11.177 [INFO][5478] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" HandleID="k8s-pod-network.d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:23:11.186919 env[1586]: 2025-09-06 01:23:11.179 [INFO][5478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:11.186919 env[1586]: 2025-09-06 01:23:11.180 [INFO][5471] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Sep 6 01:23:11.186919 env[1586]: time="2025-09-06T01:23:11.186087804Z" level=info msg="TearDown network for sandbox \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\" successfully" Sep 6 01:23:11.186919 env[1586]: time="2025-09-06T01:23:11.186119284Z" level=info msg="StopPodSandbox for \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\" returns successfully" Sep 6 01:23:11.190295 env[1586]: time="2025-09-06T01:23:11.188444765Z" level=info msg="RemovePodSandbox for \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\"" Sep 6 01:23:11.190295 env[1586]: time="2025-09-06T01:23:11.188478085Z" level=info msg="Forcibly stopping sandbox \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\"" Sep 6 01:23:11.289305 env[1586]: 2025-09-06 01:23:11.234 [WARNING][5493] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a84708b2-35a4-42bb-8dac-86ea0f5ddee1", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"ccdc294e4a7e76e49c7b30ac6ed0e03f0452ca56f01b966a366c99793249fb79", Pod:"coredns-7c65d6cfc9-5pdmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8b34c027fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:11.289305 env[1586]: 2025-09-06 01:23:11.234 [INFO][5493] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Sep 6 01:23:11.289305 env[1586]: 2025-09-06 01:23:11.234 [INFO][5493] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" iface="eth0" netns="" Sep 6 01:23:11.289305 env[1586]: 2025-09-06 01:23:11.234 [INFO][5493] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Sep 6 01:23:11.289305 env[1586]: 2025-09-06 01:23:11.234 [INFO][5493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Sep 6 01:23:11.289305 env[1586]: 2025-09-06 01:23:11.275 [INFO][5500] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" HandleID="k8s-pod-network.d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:23:11.289305 env[1586]: 2025-09-06 01:23:11.276 [INFO][5500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:11.289305 env[1586]: 2025-09-06 01:23:11.276 [INFO][5500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 01:23:11.289305 env[1586]: 2025-09-06 01:23:11.285 [WARNING][5500] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" HandleID="k8s-pod-network.d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:23:11.289305 env[1586]: 2025-09-06 01:23:11.285 [INFO][5500] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" HandleID="k8s-pod-network.d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Workload="ci--3510.3.8--n--34c19deec5-k8s-coredns--7c65d6cfc9--5pdmz-eth0" Sep 6 01:23:11.289305 env[1586]: 2025-09-06 01:23:11.286 [INFO][5500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:11.289305 env[1586]: 2025-09-06 01:23:11.288 [INFO][5493] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f" Sep 6 01:23:11.289847 env[1586]: time="2025-09-06T01:23:11.289815212Z" level=info msg="TearDown network for sandbox \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\" successfully" Sep 6 01:23:11.302772 env[1586]: time="2025-09-06T01:23:11.302728938Z" level=info msg="RemovePodSandbox \"d8c4b477ff7552087a7528ea03e1762470b021e831c99971fa8d2a4391cdac6f\" returns successfully" Sep 6 01:23:11.303404 env[1586]: time="2025-09-06T01:23:11.303378178Z" level=info msg="StopPodSandbox for \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\"" Sep 6 01:23:11.382380 env[1586]: 2025-09-06 01:23:11.345 [WARNING][5516] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0", GenerateName:"calico-apiserver-56cb94fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5d38221-ee04-42c6-b763-9c6f6f204114", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56cb94fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580", Pod:"calico-apiserver-56cb94fc6-8w5rf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie261c37c714", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:11.382380 env[1586]: 2025-09-06 01:23:11.345 [INFO][5516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Sep 6 01:23:11.382380 env[1586]: 2025-09-06 01:23:11.345 [INFO][5516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" iface="eth0" netns="" Sep 6 01:23:11.382380 env[1586]: 2025-09-06 01:23:11.345 [INFO][5516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Sep 6 01:23:11.382380 env[1586]: 2025-09-06 01:23:11.345 [INFO][5516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Sep 6 01:23:11.382380 env[1586]: 2025-09-06 01:23:11.366 [INFO][5523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" HandleID="k8s-pod-network.8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:23:11.382380 env[1586]: 2025-09-06 01:23:11.366 [INFO][5523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:11.382380 env[1586]: 2025-09-06 01:23:11.367 [INFO][5523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:23:11.382380 env[1586]: 2025-09-06 01:23:11.378 [WARNING][5523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" HandleID="k8s-pod-network.8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:23:11.382380 env[1586]: 2025-09-06 01:23:11.378 [INFO][5523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" HandleID="k8s-pod-network.8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:23:11.382380 env[1586]: 2025-09-06 01:23:11.379 [INFO][5523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:11.382380 env[1586]: 2025-09-06 01:23:11.381 [INFO][5516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Sep 6 01:23:11.382892 env[1586]: time="2025-09-06T01:23:11.382858375Z" level=info msg="TearDown network for sandbox \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\" successfully" Sep 6 01:23:11.382963 env[1586]: time="2025-09-06T01:23:11.382947535Z" level=info msg="StopPodSandbox for \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\" returns successfully" Sep 6 01:23:11.388690 env[1586]: time="2025-09-06T01:23:11.388660538Z" level=info msg="RemovePodSandbox for \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\"" Sep 6 01:23:11.388995 env[1586]: time="2025-09-06T01:23:11.388956218Z" level=info msg="Forcibly stopping sandbox \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\"" Sep 6 01:23:11.512392 env[1586]: 2025-09-06 01:23:11.476 [WARNING][5537] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0", GenerateName:"calico-apiserver-56cb94fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5d38221-ee04-42c6-b763-9c6f6f204114", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56cb94fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"5ff95b7253718709f7058e0fc2bf64384c91a266951c5b7568c976fb5d1ca580", Pod:"calico-apiserver-56cb94fc6-8w5rf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie261c37c714", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:11.512392 env[1586]: 2025-09-06 01:23:11.476 [INFO][5537] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Sep 6 01:23:11.512392 env[1586]: 2025-09-06 01:23:11.476 [INFO][5537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" iface="eth0" netns="" Sep 6 01:23:11.512392 env[1586]: 2025-09-06 01:23:11.476 [INFO][5537] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Sep 6 01:23:11.512392 env[1586]: 2025-09-06 01:23:11.476 [INFO][5537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Sep 6 01:23:11.512392 env[1586]: 2025-09-06 01:23:11.500 [INFO][5545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" HandleID="k8s-pod-network.8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:23:11.512392 env[1586]: 2025-09-06 01:23:11.500 [INFO][5545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:11.512392 env[1586]: 2025-09-06 01:23:11.501 [INFO][5545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:23:11.512392 env[1586]: 2025-09-06 01:23:11.509 [WARNING][5545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" HandleID="k8s-pod-network.8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:23:11.512392 env[1586]: 2025-09-06 01:23:11.509 [INFO][5545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" HandleID="k8s-pod-network.8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--8w5rf-eth0" Sep 6 01:23:11.512392 env[1586]: 2025-09-06 01:23:11.510 [INFO][5545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:11.512392 env[1586]: 2025-09-06 01:23:11.511 [INFO][5537] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270" Sep 6 01:23:11.513105 env[1586]: time="2025-09-06T01:23:11.513072795Z" level=info msg="TearDown network for sandbox \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\" successfully" Sep 6 01:23:11.528194 env[1586]: time="2025-09-06T01:23:11.528154362Z" level=info msg="RemovePodSandbox \"8d640dc67cd9dc8e43167eab759ff83b7ac1cbdb35781beeef948e46087dd270\" returns successfully" Sep 6 01:23:11.528811 env[1586]: time="2025-09-06T01:23:11.528781483Z" level=info msg="StopPodSandbox for \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\"" Sep 6 01:23:11.600179 env[1586]: 2025-09-06 01:23:11.565 [WARNING][5560] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9edb798-4304-4b76-a60a-df0eaa0d87c0", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc", Pod:"csi-node-driver-r5mms", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali143458d2057", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:11.600179 env[1586]: 2025-09-06 01:23:11.565 [INFO][5560] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Sep 6 01:23:11.600179 env[1586]: 2025-09-06 01:23:11.565 [INFO][5560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" iface="eth0" netns="" Sep 6 01:23:11.600179 env[1586]: 2025-09-06 01:23:11.565 [INFO][5560] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Sep 6 01:23:11.600179 env[1586]: 2025-09-06 01:23:11.565 [INFO][5560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Sep 6 01:23:11.600179 env[1586]: 2025-09-06 01:23:11.586 [INFO][5567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" HandleID="k8s-pod-network.fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Workload="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:23:11.600179 env[1586]: 2025-09-06 01:23:11.587 [INFO][5567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:11.600179 env[1586]: 2025-09-06 01:23:11.587 [INFO][5567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:23:11.600179 env[1586]: 2025-09-06 01:23:11.595 [WARNING][5567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" HandleID="k8s-pod-network.fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Workload="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:23:11.600179 env[1586]: 2025-09-06 01:23:11.595 [INFO][5567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" HandleID="k8s-pod-network.fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Workload="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:23:11.600179 env[1586]: 2025-09-06 01:23:11.597 [INFO][5567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:11.600179 env[1586]: 2025-09-06 01:23:11.598 [INFO][5560] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Sep 6 01:23:11.600632 env[1586]: time="2025-09-06T01:23:11.600210196Z" level=info msg="TearDown network for sandbox \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\" successfully" Sep 6 01:23:11.600632 env[1586]: time="2025-09-06T01:23:11.600259396Z" level=info msg="StopPodSandbox for \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\" returns successfully" Sep 6 01:23:11.601110 env[1586]: time="2025-09-06T01:23:11.601084356Z" level=info msg="RemovePodSandbox for \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\"" Sep 6 01:23:11.601178 env[1586]: time="2025-09-06T01:23:11.601114476Z" level=info msg="Forcibly stopping sandbox \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\"" Sep 6 01:23:11.714215 env[1586]: 2025-09-06 01:23:11.655 [WARNING][5582] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9edb798-4304-4b76-a60a-df0eaa0d87c0", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"2218e28adb1761d6e0216ffabd2b8b9ac70c51b19e318aeda2a3debb31a235cc", Pod:"csi-node-driver-r5mms", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali143458d2057", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:11.714215 env[1586]: 2025-09-06 01:23:11.656 [INFO][5582] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Sep 6 01:23:11.714215 env[1586]: 2025-09-06 01:23:11.656 [INFO][5582] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" iface="eth0" netns="" Sep 6 01:23:11.714215 env[1586]: 2025-09-06 01:23:11.656 [INFO][5582] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Sep 6 01:23:11.714215 env[1586]: 2025-09-06 01:23:11.656 [INFO][5582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Sep 6 01:23:11.714215 env[1586]: 2025-09-06 01:23:11.694 [INFO][5589] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" HandleID="k8s-pod-network.fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Workload="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:23:11.714215 env[1586]: 2025-09-06 01:23:11.694 [INFO][5589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:11.714215 env[1586]: 2025-09-06 01:23:11.694 [INFO][5589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:23:11.714215 env[1586]: 2025-09-06 01:23:11.710 [WARNING][5589] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" HandleID="k8s-pod-network.fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Workload="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:23:11.714215 env[1586]: 2025-09-06 01:23:11.710 [INFO][5589] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" HandleID="k8s-pod-network.fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Workload="ci--3510.3.8--n--34c19deec5-k8s-csi--node--driver--r5mms-eth0" Sep 6 01:23:11.714215 env[1586]: 2025-09-06 01:23:11.711 [INFO][5589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:11.714215 env[1586]: 2025-09-06 01:23:11.712 [INFO][5582] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953" Sep 6 01:23:11.714666 env[1586]: time="2025-09-06T01:23:11.714262489Z" level=info msg="TearDown network for sandbox \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\" successfully" Sep 6 01:23:11.729058 env[1586]: time="2025-09-06T01:23:11.729013815Z" level=info msg="RemovePodSandbox \"fb040c74039d975a5b72acbb13b204bfc8ce6207f7a737662699734322391953\" returns successfully" Sep 6 01:23:11.729512 env[1586]: time="2025-09-06T01:23:11.729490776Z" level=info msg="StopPodSandbox for \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\"" Sep 6 01:23:11.814494 env[1586]: 2025-09-06 01:23:11.769 [WARNING][5603] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0", GenerateName:"calico-kube-controllers-9fd44b5cd-", Namespace:"calico-system", SelfLink:"", UID:"f4c05567-6639-4b0f-94fa-e71542024f21", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9fd44b5cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22", Pod:"calico-kube-controllers-9fd44b5cd-ggmhh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib6453dd5dc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:11.814494 env[1586]: 2025-09-06 01:23:11.769 [INFO][5603] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Sep 6 01:23:11.814494 env[1586]: 2025-09-06 01:23:11.769 [INFO][5603] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" iface="eth0" netns="" Sep 6 01:23:11.814494 env[1586]: 2025-09-06 01:23:11.769 [INFO][5603] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Sep 6 01:23:11.814494 env[1586]: 2025-09-06 01:23:11.769 [INFO][5603] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Sep 6 01:23:11.814494 env[1586]: 2025-09-06 01:23:11.787 [INFO][5610] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" HandleID="k8s-pod-network.f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:23:11.814494 env[1586]: 2025-09-06 01:23:11.787 [INFO][5610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:11.814494 env[1586]: 2025-09-06 01:23:11.787 [INFO][5610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:23:11.814494 env[1586]: 2025-09-06 01:23:11.798 [WARNING][5610] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" HandleID="k8s-pod-network.f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:23:11.814494 env[1586]: 2025-09-06 01:23:11.798 [INFO][5610] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" HandleID="k8s-pod-network.f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:23:11.814494 env[1586]: 2025-09-06 01:23:11.802 [INFO][5610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:11.814494 env[1586]: 2025-09-06 01:23:11.810 [INFO][5603] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Sep 6 01:23:11.815029 env[1586]: time="2025-09-06T01:23:11.814995415Z" level=info msg="TearDown network for sandbox \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\" successfully" Sep 6 01:23:11.815094 env[1586]: time="2025-09-06T01:23:11.815079535Z" level=info msg="StopPodSandbox for \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\" returns successfully" Sep 6 01:23:11.816151 env[1586]: time="2025-09-06T01:23:11.816119816Z" level=info msg="RemovePodSandbox for \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\"" Sep 6 01:23:11.816272 env[1586]: time="2025-09-06T01:23:11.816152736Z" level=info msg="Forcibly stopping sandbox \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\"" Sep 6 01:23:11.906602 env[1586]: 2025-09-06 01:23:11.858 [WARNING][5627] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0", GenerateName:"calico-kube-controllers-9fd44b5cd-", Namespace:"calico-system", SelfLink:"", UID:"f4c05567-6639-4b0f-94fa-e71542024f21", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9fd44b5cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"19c564915bda609549386b4e50512df3de3138779a03d7052fe61eff03eb6e22", Pod:"calico-kube-controllers-9fd44b5cd-ggmhh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib6453dd5dc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:11.906602 env[1586]: 2025-09-06 01:23:11.858 [INFO][5627] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Sep 6 01:23:11.906602 env[1586]: 2025-09-06 01:23:11.858 [INFO][5627] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" iface="eth0" netns="" Sep 6 01:23:11.906602 env[1586]: 2025-09-06 01:23:11.858 [INFO][5627] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Sep 6 01:23:11.906602 env[1586]: 2025-09-06 01:23:11.858 [INFO][5627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Sep 6 01:23:11.906602 env[1586]: 2025-09-06 01:23:11.890 [INFO][5634] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" HandleID="k8s-pod-network.f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:23:11.906602 env[1586]: 2025-09-06 01:23:11.890 [INFO][5634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:11.906602 env[1586]: 2025-09-06 01:23:11.890 [INFO][5634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:23:11.906602 env[1586]: 2025-09-06 01:23:11.902 [WARNING][5634] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" HandleID="k8s-pod-network.f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:23:11.906602 env[1586]: 2025-09-06 01:23:11.902 [INFO][5634] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" HandleID="k8s-pod-network.f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--kube--controllers--9fd44b5cd--ggmhh-eth0" Sep 6 01:23:11.906602 env[1586]: 2025-09-06 01:23:11.903 [INFO][5634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:11.906602 env[1586]: 2025-09-06 01:23:11.905 [INFO][5627] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd" Sep 6 01:23:11.907007 env[1586]: time="2025-09-06T01:23:11.906625858Z" level=info msg="TearDown network for sandbox \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\" successfully" Sep 6 01:23:11.921104 env[1586]: time="2025-09-06T01:23:11.921042625Z" level=info msg="RemovePodSandbox \"f7907bc0909e4c74f245d687f3dff3e8ae6fb2038ccd21518dfaab8b7a11cbbd\" returns successfully" Sep 6 01:23:11.921587 env[1586]: time="2025-09-06T01:23:11.921554305Z" level=info msg="StopPodSandbox for \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\"" Sep 6 01:23:12.017406 env[1586]: 2025-09-06 01:23:11.976 [WARNING][5653] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0", GenerateName:"calico-apiserver-56cb94fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"75a2fc54-827a-4281-a019-abda9f06779c", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56cb94fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8", Pod:"calico-apiserver-56cb94fc6-989l7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2a7f55f8928", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:12.017406 env[1586]: 2025-09-06 01:23:11.976 [INFO][5653] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Sep 6 01:23:12.017406 env[1586]: 2025-09-06 01:23:11.976 [INFO][5653] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" iface="eth0" netns="" Sep 6 01:23:12.017406 env[1586]: 2025-09-06 01:23:11.976 [INFO][5653] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Sep 6 01:23:12.017406 env[1586]: 2025-09-06 01:23:11.977 [INFO][5653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Sep 6 01:23:12.017406 env[1586]: 2025-09-06 01:23:12.003 [INFO][5661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" HandleID="k8s-pod-network.ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:23:12.017406 env[1586]: 2025-09-06 01:23:12.003 [INFO][5661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:12.017406 env[1586]: 2025-09-06 01:23:12.004 [INFO][5661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:23:12.017406 env[1586]: 2025-09-06 01:23:12.013 [WARNING][5661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" HandleID="k8s-pod-network.ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:23:12.017406 env[1586]: 2025-09-06 01:23:12.013 [INFO][5661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" HandleID="k8s-pod-network.ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:23:12.017406 env[1586]: 2025-09-06 01:23:12.014 [INFO][5661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:12.017406 env[1586]: 2025-09-06 01:23:12.016 [INFO][5653] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Sep 6 01:23:12.017809 env[1586]: time="2025-09-06T01:23:12.017423629Z" level=info msg="TearDown network for sandbox \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\" successfully" Sep 6 01:23:12.017809 env[1586]: time="2025-09-06T01:23:12.017453229Z" level=info msg="StopPodSandbox for \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\" returns successfully" Sep 6 01:23:12.017949 env[1586]: time="2025-09-06T01:23:12.017915509Z" level=info msg="RemovePodSandbox for \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\"" Sep 6 01:23:12.017995 env[1586]: time="2025-09-06T01:23:12.017950309Z" level=info msg="Forcibly stopping sandbox \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\"" Sep 6 01:23:12.104602 env[1586]: 2025-09-06 01:23:12.062 [WARNING][5676] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0", GenerateName:"calico-apiserver-56cb94fc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"75a2fc54-827a-4281-a019-abda9f06779c", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56cb94fc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34c19deec5", ContainerID:"eac5f78f7da79a935ba3f5428f8339e0d7b746dae03c537582045dede2c932f8", Pod:"calico-apiserver-56cb94fc6-989l7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2a7f55f8928", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:23:12.104602 env[1586]: 2025-09-06 01:23:12.062 [INFO][5676] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Sep 6 01:23:12.104602 env[1586]: 2025-09-06 01:23:12.062 [INFO][5676] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" iface="eth0" netns="" Sep 6 01:23:12.104602 env[1586]: 2025-09-06 01:23:12.062 [INFO][5676] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Sep 6 01:23:12.104602 env[1586]: 2025-09-06 01:23:12.062 [INFO][5676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Sep 6 01:23:12.104602 env[1586]: 2025-09-06 01:23:12.086 [INFO][5683] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" HandleID="k8s-pod-network.ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:23:12.104602 env[1586]: 2025-09-06 01:23:12.086 [INFO][5683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:23:12.104602 env[1586]: 2025-09-06 01:23:12.086 [INFO][5683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:23:12.104602 env[1586]: 2025-09-06 01:23:12.096 [WARNING][5683] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" HandleID="k8s-pod-network.ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:23:12.104602 env[1586]: 2025-09-06 01:23:12.096 [INFO][5683] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" HandleID="k8s-pod-network.ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Workload="ci--3510.3.8--n--34c19deec5-k8s-calico--apiserver--56cb94fc6--989l7-eth0" Sep 6 01:23:12.104602 env[1586]: 2025-09-06 01:23:12.097 [INFO][5683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:23:12.104602 env[1586]: 2025-09-06 01:23:12.101 [INFO][5676] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64" Sep 6 01:23:12.105063 env[1586]: time="2025-09-06T01:23:12.105032230Z" level=info msg="TearDown network for sandbox \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\" successfully" Sep 6 01:23:12.157958 env[1586]: time="2025-09-06T01:23:12.157893014Z" level=info msg="RemovePodSandbox \"ae2ef6667ff1b4ef29467318ee88ac32ab3d404e005c1556f2035af5a33aad64\" returns successfully" Sep 6 01:23:12.753284 systemd[1]: run-containerd-runc-k8s.io-778cf78c78b5b76721b95a03843c8a4a964e3e704bbc7af7a34a84fdf4c227ba-runc.Bsbmyo.mount: Deactivated successfully. Sep 6 01:23:12.780392 systemd[1]: run-containerd-runc-k8s.io-b08420d6fe328983dd2438f8c261e713497b7420bc85e27786f356502947a85c-runc.ISoc3K.mount: Deactivated successfully. Sep 6 01:23:26.283231 systemd[1]: run-containerd-runc-k8s.io-b08420d6fe328983dd2438f8c261e713497b7420bc85e27786f356502947a85c-runc.PxqpAu.mount: Deactivated successfully. Sep 6 01:23:30.190924 systemd[1]: run-containerd-runc-k8s.io-31c28442c4258c80440ede4c427d6dcda161e5b05d2e882f5c1546afc6c909cb-runc.fTDeQs.mount: Deactivated successfully. 
Sep 6 01:23:30.962074 kubelet[2729]: I0906 01:23:30.961725 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 01:23:31.043000 audit[5777]: NETFILTER_CFG table=filter:131 family=2 entries=11 op=nft_register_rule pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:31.043000 audit[5777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffe1c3b370 a2=0 a3=1 items=0 ppid=2834 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:31.089672 kernel: audit: type=1325 audit(1757121811.043:448): table=filter:131 family=2 entries=11 op=nft_register_rule pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:31.089829 kernel: audit: type=1300 audit(1757121811.043:448): arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffe1c3b370 a2=0 a3=1 items=0 ppid=2834 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:31.089949 kernel: audit: type=1327 audit(1757121811.043:448): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:31.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:31.104000 audit[5777]: NETFILTER_CFG table=nat:132 family=2 entries=29 op=nft_register_chain pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:31.104000 audit[5777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=ffffe1c3b370 a2=0 a3=1 items=0 ppid=2834 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:31.167163 kernel: audit: type=1325 audit(1757121811.104:449): table=nat:132 family=2 entries=29 op=nft_register_chain pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:31.167331 kernel: audit: type=1300 audit(1757121811.104:449): arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=ffffe1c3b370 a2=0 a3=1 items=0 ppid=2834 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:31.104000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:31.180624 kernel: audit: type=1327 audit(1757121811.104:449): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:33.718854 kubelet[2729]: I0906 01:23:33.718148 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 01:23:33.783000 audit[5785]: NETFILTER_CFG table=filter:133 family=2 entries=10 op=nft_register_rule pid=5785 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:33.783000 audit[5785]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffec8c95a0 a2=0 a3=1 items=0 ppid=2834 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:33.833697 kernel: audit: type=1325 audit(1757121813.783:450): table=filter:133 family=2 entries=10 op=nft_register_rule pid=5785 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:33.833806 kernel: audit: type=1300 audit(1757121813.783:450): arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffec8c95a0 a2=0 a3=1 items=0 ppid=2834 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:33.783000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:33.851318 kernel: audit: type=1327 audit(1757121813.783:450): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:33.803000 audit[5785]: NETFILTER_CFG table=nat:134 family=2 entries=36 op=nft_register_chain pid=5785 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:33.866148 kernel: audit: type=1325 audit(1757121813.803:451): table=nat:134 family=2 entries=36 op=nft_register_chain pid=5785 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:33.803000 audit[5785]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12004 a0=3 a1=ffffec8c95a0 a2=0 a3=1 items=0 ppid=2834 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:33.803000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:42.770542 systemd[1]: run-containerd-runc-k8s.io-778cf78c78b5b76721b95a03843c8a4a964e3e704bbc7af7a34a84fdf4c227ba-runc.UdzmOm.mount: Deactivated successfully. 
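
[Note] The NETFILTER_CFG/SYSCALL pairs above are iptables-restore runs landing in the filter and nat tables via nftables (op=nft_register_rule / nft_register_chain, exe=/usr/sbin/xtables-nft-multi). The PROCTITLE record carries the full argv, hex-encoded with NUL separators, so the hex string repeated above decodes to the actual command line. A small decoding helper, assuming only Python 3 and the hex value itself:

    def decode_proctitle(hex_argv: str) -> str:
        """Audit PROCTITLE values are the process argv, hex-encoded with NUL separators."""
        return bytes.fromhex(hex_argv).replace(b"\x00", b" ").decode("utf-8", errors="replace")

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters

The sshd PROCTITLE seen later (737368643A20636F7265205B707269765D) decodes the same way, to "sshd: core [priv]".
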
Sep 6 01:23:42.878000 audit[5827]: NETFILTER_CFG table=filter:135 family=2 entries=9 op=nft_register_rule pid=5827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:42.885045 kernel: kauditd_printk_skb: 2 callbacks suppressed Sep 6 01:23:42.885159 kernel: audit: type=1325 audit(1757121822.878:452): table=filter:135 family=2 entries=9 op=nft_register_rule pid=5827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:42.878000 audit[5827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffff77cfc00 a2=0 a3=1 items=0 ppid=2834 pid=5827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:42.924956 kernel: audit: type=1300 audit(1757121822.878:452): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffff77cfc00 a2=0 a3=1 items=0 ppid=2834 pid=5827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:42.878000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:42.939898 kernel: audit: type=1327 audit(1757121822.878:452): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:42.927000 audit[5827]: NETFILTER_CFG table=nat:136 family=2 entries=31 op=nft_register_chain pid=5827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:42.954715 kernel: audit: type=1325 audit(1757121822.927:453): table=nat:136 family=2 entries=31 op=nft_register_chain pid=5827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:23:42.927000 audit[5827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=fffff77cfc00 a2=0 a3=1 items=0 ppid=2834 pid=5827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:42.979698 kernel: audit: type=1300 audit(1757121822.927:453): arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=fffff77cfc00 a2=0 a3=1 items=0 ppid=2834 pid=5827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:42.927000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:23:42.992711 kernel: audit: type=1327 audit(1757121822.927:453): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:24:29.404757 systemd[1]: Started sshd@7-10.200.20.27:22-10.200.16.10:41660.service. Sep 6 01:24:29.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.27:22-10.200.16.10:41660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:24:29.426288 kernel: audit: type=1130 audit(1757121869.403:454): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.27:22-10.200.16.10:41660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:24:29.877000 audit[5971]: USER_ACCT pid=5971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:29.881220 sshd[5971]: Accepted publickey for core from 10.200.16.10 port 41660 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:29.902000 audit[5971]: CRED_ACQ pid=5971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:29.904799 sshd[5971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:29.927366 kernel: audit: type=1101 audit(1757121869.877:455): pid=5971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:29.927933 kernel: audit: type=1103 audit(1757121869.902:456): pid=5971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:29.927968 kernel: audit: type=1006 audit(1757121869.902:457): pid=5971 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Sep 6 01:24:29.902000 audit[5971]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffdaabd40 a2=3 a3=1 items=0 ppid=1 pid=5971 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:24:29.967195 kernel: audit: type=1300 audit(1757121869.902:457): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffdaabd40 a2=3 a3=1 items=0 ppid=1 pid=5971 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:24:29.902000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:24:29.970731 systemd[1]: Started session-10.scope. Sep 6 01:24:29.971186 systemd-logind[1571]: New session 10 of user core. 
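
[Note] Every audit record above carries an audit(EPOCH.MS:SERIAL) stamp, and kauditd echoes the same record into the kernel ring buffer under the same serial, which is why each event appears twice (once as "audit[pid]: ...", once as "kernel: audit: type=NNNN ..."). A sketch for turning a stamp back into wall-clock time so it can be lined up with the journal prefix, assuming the node clock is UTC, as it appears to be here:

    import re
    from datetime import datetime, timezone

    STAMP = re.compile(r"audit\((\d+)\.(\d+):(\d+)\)")

    def audit_stamp(line):
        """Extract (UTC datetime, serial) from an 'audit(1757121869.902:457)'-style stamp, if present."""
        m = STAMP.search(line)
        if not m:
            return None
        seconds, frac, serial = m.groups()
        return datetime.fromtimestamp(float(f"{seconds}.{frac}"), tz=timezone.utc), int(serial)

    # audit(1757121869.902:457) -> (2025-09-06 01:24:29.902+00:00, 457),
    # matching the "Sep 6 01:24:29.902000" prefix on the SYSCALL record above.
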
Sep 6 01:24:29.975779 kernel: audit: type=1327 audit(1757121869.902:457): proctitle=737368643A20636F7265205B707269765D Sep 6 01:24:29.978621 kernel: audit: type=1105 audit(1757121869.975:458): pid=5971 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:29.975000 audit[5971]: USER_START pid=5971 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:29.975000 audit[5974]: CRED_ACQ pid=5974 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:30.026708 kernel: audit: type=1103 audit(1757121869.975:459): pid=5974 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:30.412456 sshd[5971]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:30.412000 audit[5971]: USER_END pid=5971 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:30.416410 systemd[1]: sshd@7-10.200.20.27:22-10.200.16.10:41660.service: Deactivated successfully. Sep 6 01:24:30.417217 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 01:24:30.413000 audit[5971]: CRED_DISP pid=5971 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:30.441912 systemd-logind[1571]: Session 10 logged out. Waiting for processes to exit. Sep 6 01:24:30.442810 systemd-logind[1571]: Removed session 10. Sep 6 01:24:30.460657 kernel: audit: type=1106 audit(1757121870.412:460): pid=5971 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:30.460783 kernel: audit: type=1104 audit(1757121870.413:461): pid=5971 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:30.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.27:22-10.200.16.10:41660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:24:35.513270 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:24:35.513396 kernel: audit: type=1130 audit(1757121875.485:463): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.27:22-10.200.16.10:53958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:24:35.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.27:22-10.200.16.10:53958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:24:35.486307 systemd[1]: Started sshd@8-10.200.20.27:22-10.200.16.10:53958.service. Sep 6 01:24:35.938000 audit[6007]: USER_ACCT pid=6007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:35.949497 sshd[6007]: Accepted publickey for core from 10.200.16.10 port 53958 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:35.963000 audit[6007]: CRED_ACQ pid=6007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:35.965454 sshd[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:35.985550 kernel: audit: type=1101 audit(1757121875.938:464): pid=6007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:35.985672 kernel: audit: type=1103 audit(1757121875.963:465): pid=6007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:36.001403 kernel: audit: type=1006 audit(1757121875.963:466): pid=6007 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Sep 6 01:24:35.963000 audit[6007]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff33ad740 a2=3 a3=1 items=0 ppid=1 pid=6007 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:24:36.023651 kernel: audit: type=1300 audit(1757121875.963:466): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff33ad740 a2=3 a3=1 items=0 ppid=1 pid=6007 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:24:35.963000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:24:36.024144 systemd-logind[1571]: New session 11 of user core. Sep 6 01:24:36.024993 systemd[1]: Started session-11.scope. 
Sep 6 01:24:36.034317 kernel: audit: type=1327 audit(1757121875.963:466): proctitle=737368643A20636F7265205B707269765D Sep 6 01:24:36.036000 audit[6007]: USER_START pid=6007 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:36.037000 audit[6010]: CRED_ACQ pid=6010 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:36.087744 kernel: audit: type=1105 audit(1757121876.036:467): pid=6007 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:36.087871 kernel: audit: type=1103 audit(1757121876.037:468): pid=6010 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:36.418605 sshd[6007]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:36.418000 audit[6007]: USER_END pid=6007 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:36.421522 systemd[1]: sshd@8-10.200.20.27:22-10.200.16.10:53958.service: Deactivated successfully. Sep 6 01:24:36.422331 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 01:24:36.445369 systemd-logind[1571]: Session 11 logged out. Waiting for processes to exit. Sep 6 01:24:36.446151 systemd-logind[1571]: Removed session 11. Sep 6 01:24:36.418000 audit[6007]: CRED_DISP pid=6007 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:36.468905 kernel: audit: type=1106 audit(1757121876.418:469): pid=6007 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:36.469015 kernel: audit: type=1104 audit(1757121876.418:470): pid=6007 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:36.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.27:22-10.200.16.10:53958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:24:41.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.27:22-10.200.16.10:34278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:24:41.495485 systemd[1]: Started sshd@9-10.200.20.27:22-10.200.16.10:34278.service. Sep 6 01:24:41.500756 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:24:41.500842 kernel: audit: type=1130 audit(1757121881.494:472): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.27:22-10.200.16.10:34278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:24:41.912000 audit[6021]: USER_ACCT pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:41.916900 sshd[6021]: Accepted publickey for core from 10.200.16.10 port 34278 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:41.938270 kernel: audit: type=1101 audit(1757121881.912:473): pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:41.938370 kernel: audit: type=1103 audit(1757121881.936:474): pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:41.936000 audit[6021]: CRED_ACQ pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:41.939439 sshd[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:41.975495 kernel: audit: type=1006 audit(1757121881.937:475): pid=6021 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Sep 6 01:24:41.937000 audit[6021]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff3865470 a2=3 a3=1 items=0 ppid=1 pid=6021 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:24:41.998078 kernel: audit: type=1300 audit(1757121881.937:475): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff3865470 a2=3 a3=1 items=0 ppid=1 pid=6021 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:24:41.937000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:24:42.001736 systemd[1]: Started session-12.scope. Sep 6 01:24:42.002689 systemd-logind[1571]: New session 12 of user core. 
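
[Note] Each accepted connection above produces the same PAM/audit sequence: USER_ACCT and a first CRED_ACQ while the session id is still unset (ses=4294967295), a type=1006 LOGIN record that assigns the real id (old-ses=4294967295 -> ses=12), then USER_START plus a second CRED_ACQ from the forked session process (pid 6024 vs 6021 above), and finally USER_END and CRED_DISP at logout. A sketch that groups the post-login records by that ses= field so one login (for example ses=12) can be read as a unit; the helper name and output shape are assumptions for illustration:

    import re
    from collections import defaultdict

    SES = re.compile(r"\bses=(\d+)\b")
    KIND = re.compile(r"\baudit\[\d+\]: ([A-Z_]+)\b")  # userspace records only; kernel echoes don't match, so no double counting

    def sessions(lines):
        """Group userspace audit record types by session id,
        e.g. {12: ['SYSCALL', 'USER_START', 'CRED_ACQ', 'USER_END', 'CRED_DISP'], ...}."""
        by_ses = defaultdict(list)
        for line in lines:
            ses, kind = SES.search(line), KIND.search(line)
            if ses and kind and ses.group(1) != "4294967295":  # 4294967295 is the unset (-1) session id
                by_ses[int(ses.group(1))].append(kind.group(1))
        return dict(by_ses)
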
Sep 6 01:24:42.007047 kernel: audit: type=1327 audit(1757121881.937:475): proctitle=737368643A20636F7265205B707269765D Sep 6 01:24:42.009289 kernel: audit: type=1105 audit(1757121882.005:476): pid=6021 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:42.005000 audit[6021]: USER_START pid=6021 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:42.005000 audit[6024]: CRED_ACQ pid=6024 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:42.053013 kernel: audit: type=1103 audit(1757121882.005:477): pid=6024 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:42.328554 sshd[6021]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:42.328000 audit[6021]: USER_END pid=6021 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:42.331068 systemd[1]: sshd@9-10.200.20.27:22-10.200.16.10:34278.service: Deactivated successfully. Sep 6 01:24:42.331919 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 01:24:42.328000 audit[6021]: CRED_DISP pid=6021 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:42.357458 systemd-logind[1571]: Session 12 logged out. Waiting for processes to exit. Sep 6 01:24:42.358416 systemd-logind[1571]: Removed session 12. Sep 6 01:24:42.377560 kernel: audit: type=1106 audit(1757121882.328:478): pid=6021 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:42.377718 kernel: audit: type=1104 audit(1757121882.328:479): pid=6021 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:42.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.27:22-10.200.16.10:34278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:24:42.436362 systemd[1]: Started sshd@10-10.200.20.27:22-10.200.16.10:34294.service. 
Sep 6 01:24:42.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.27:22-10.200.16.10:34294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:24:42.749579 systemd[1]: run-containerd-runc-k8s.io-778cf78c78b5b76721b95a03843c8a4a964e3e704bbc7af7a34a84fdf4c227ba-runc.7ZfBD9.mount: Deactivated successfully. Sep 6 01:24:42.938164 sshd[6034]: Accepted publickey for core from 10.200.16.10 port 34294 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:42.936000 audit[6034]: USER_ACCT pid=6034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:42.937000 audit[6034]: CRED_ACQ pid=6034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:42.937000 audit[6034]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc554fdf0 a2=3 a3=1 items=0 ppid=1 pid=6034 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:24:42.937000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:24:42.939515 sshd[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:42.943504 systemd-logind[1571]: New session 13 of user core. Sep 6 01:24:42.943735 systemd[1]: Started session-13.scope. Sep 6 01:24:42.946000 audit[6034]: USER_START pid=6034 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:42.948000 audit[6078]: CRED_ACQ pid=6078 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:43.396486 sshd[6034]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:43.396000 audit[6034]: USER_END pid=6034 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:43.396000 audit[6034]: CRED_DISP pid=6034 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:43.399696 systemd-logind[1571]: Session 13 logged out. Waiting for processes to exit. Sep 6 01:24:43.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.27:22-10.200.16.10:34294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:24:43.400268 systemd[1]: sshd@10-10.200.20.27:22-10.200.16.10:34294.service: Deactivated successfully. Sep 6 01:24:43.401104 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 01:24:43.401556 systemd-logind[1571]: Removed session 13. Sep 6 01:24:43.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.27:22-10.200.16.10:34304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:24:43.463831 systemd[1]: Started sshd@11-10.200.20.27:22-10.200.16.10:34304.service. Sep 6 01:24:43.915000 audit[6087]: USER_ACCT pid=6087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:43.916527 sshd[6087]: Accepted publickey for core from 10.200.16.10 port 34304 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:43.916000 audit[6087]: CRED_ACQ pid=6087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:43.916000 audit[6087]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcead79b0 a2=3 a3=1 items=0 ppid=1 pid=6087 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:24:43.916000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:24:43.918194 sshd[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:43.922319 systemd-logind[1571]: New session 14 of user core. Sep 6 01:24:43.922675 systemd[1]: Started session-14.scope. Sep 6 01:24:43.925000 audit[6087]: USER_START pid=6087 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:43.927000 audit[6090]: CRED_ACQ pid=6090 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:44.322323 sshd[6087]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:44.322000 audit[6087]: USER_END pid=6087 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:44.322000 audit[6087]: CRED_DISP pid=6087 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:44.325235 systemd-logind[1571]: Session 14 logged out. Waiting for processes to exit. 
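
[Note] sshd on this host appears to be socket-activated per connection: every login above gets its own transient unit named sshd@N-<local address>:22-<peer address>:<port>.service, bracketed in the audit stream by a SERVICE_START and a SERVICE_STOP whose msg carries unit= (without the .service suffix). A sketch that pairs those two events per connection unit; the names are illustrative:

    import re

    UNIT = re.compile(r"unit=(sshd@\S+)")              # audit msg omits the .service suffix
    KIND = re.compile(r"\b(SERVICE_START|SERVICE_STOP)\b")  # kernel echoes only carry the numeric type (1130/1131), so they don't match

    def ssh_connection_units(lines):
        """Map each per-connection sshd@ unit to the audit events seen for it."""
        events = {}
        for line in lines:
            unit, kind = UNIT.search(line), KIND.search(line)
            if unit and kind:
                events.setdefault(unit.group(1), []).append(kind.group(1))
        return events

    # e.g. {'sshd@10-10.200.20.27:22-10.200.16.10:34294': ['SERVICE_START', 'SERVICE_STOP'], ...};
    # an entry still missing its SERVICE_STOP is a connection that was open when the capture ended.
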
Sep 6 01:24:44.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.27:22-10.200.16.10:34304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:24:44.326133 systemd[1]: sshd@11-10.200.20.27:22-10.200.16.10:34304.service: Deactivated successfully. Sep 6 01:24:44.326940 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 01:24:44.328041 systemd-logind[1571]: Removed session 14. Sep 6 01:24:49.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.27:22-10.200.16.10:34310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:24:49.396203 systemd[1]: Started sshd@12-10.200.20.27:22-10.200.16.10:34310.service. Sep 6 01:24:49.401446 kernel: kauditd_printk_skb: 23 callbacks suppressed Sep 6 01:24:49.401556 kernel: audit: type=1130 audit(1757121889.395:499): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.27:22-10.200.16.10:34310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:24:49.845000 audit[6106]: USER_ACCT pid=6106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:49.846756 sshd[6106]: Accepted publickey for core from 10.200.16.10 port 34310 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:49.848770 sshd[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:49.847000 audit[6106]: CRED_ACQ pid=6106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:49.878463 systemd[1]: Started session-15.scope. Sep 6 01:24:49.879639 systemd-logind[1571]: New session 15 of user core. 
Sep 6 01:24:49.891136 kernel: audit: type=1101 audit(1757121889.845:500): pid=6106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:49.891263 kernel: audit: type=1103 audit(1757121889.847:501): pid=6106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:49.906399 kernel: audit: type=1006 audit(1757121889.847:502): pid=6106 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Sep 6 01:24:49.847000 audit[6106]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe51146a0 a2=3 a3=1 items=0 ppid=1 pid=6106 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:24:49.929111 kernel: audit: type=1300 audit(1757121889.847:502): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe51146a0 a2=3 a3=1 items=0 ppid=1 pid=6106 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:24:49.847000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:24:49.938070 kernel: audit: type=1327 audit(1757121889.847:502): proctitle=737368643A20636F7265205B707269765D Sep 6 01:24:49.938287 kernel: audit: type=1105 audit(1757121889.883:503): pid=6106 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:49.883000 audit[6106]: USER_START pid=6106 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:49.891000 audit[6109]: CRED_ACQ pid=6109 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:49.983114 kernel: audit: type=1103 audit(1757121889.891:504): pid=6109 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:50.247574 sshd[6106]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:50.247000 audit[6106]: USER_END pid=6106 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:50.254873 systemd-logind[1571]: Session 15 logged out. Waiting for processes to exit. 
Sep 6 01:24:50.257568 systemd[1]: sshd@12-10.200.20.27:22-10.200.16.10:34310.service: Deactivated successfully. Sep 6 01:24:50.258409 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 01:24:50.259856 systemd-logind[1571]: Removed session 15. Sep 6 01:24:50.247000 audit[6106]: CRED_DISP pid=6106 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:50.295085 kernel: audit: type=1106 audit(1757121890.247:505): pid=6106 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:50.295200 kernel: audit: type=1104 audit(1757121890.247:506): pid=6106 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:50.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.27:22-10.200.16.10:34310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:24:55.340442 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:24:55.340574 kernel: audit: type=1130 audit(1757121895.314:508): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.27:22-10.200.16.10:39588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:24:55.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.27:22-10.200.16.10:39588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:24:55.315736 systemd[1]: Started sshd@13-10.200.20.27:22-10.200.16.10:39588.service. Sep 6 01:24:55.362251 systemd[1]: run-containerd-runc-k8s.io-778cf78c78b5b76721b95a03843c8a4a964e3e704bbc7af7a34a84fdf4c227ba-runc.so1TXN.mount: Deactivated successfully. 
Sep 6 01:24:55.725000 audit[6118]: USER_ACCT pid=6118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:55.733889 sshd[6118]: Accepted publickey for core from 10.200.16.10 port 39588 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:55.733762 sshd[6118]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:55.731000 audit[6118]: CRED_ACQ pid=6118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:55.771705 kernel: audit: type=1101 audit(1757121895.725:509): pid=6118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:55.771810 kernel: audit: type=1103 audit(1757121895.731:510): pid=6118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:55.786915 kernel: audit: type=1006 audit(1757121895.731:511): pid=6118 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Sep 6 01:24:55.731000 audit[6118]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd35d350 a2=3 a3=1 items=0 ppid=1 pid=6118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:24:55.811149 kernel: audit: type=1300 audit(1757121895.731:511): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd35d350 a2=3 a3=1 items=0 ppid=1 pid=6118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:24:55.790881 systemd[1]: Started session-16.scope. Sep 6 01:24:55.810779 systemd-logind[1571]: New session 16 of user core. 
Sep 6 01:24:55.731000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:24:55.822398 kernel: audit: type=1327 audit(1757121895.731:511): proctitle=737368643A20636F7265205B707269765D Sep 6 01:24:55.824000 audit[6118]: USER_START pid=6118 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:55.849000 audit[6139]: CRED_ACQ pid=6139 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:55.871034 kernel: audit: type=1105 audit(1757121895.824:512): pid=6118 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:55.871121 kernel: audit: type=1103 audit(1757121895.849:513): pid=6139 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:56.158375 sshd[6118]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:56.158000 audit[6118]: USER_END pid=6118 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:56.162546 systemd-logind[1571]: Session 16 logged out. Waiting for processes to exit. Sep 6 01:24:56.163926 systemd[1]: sshd@13-10.200.20.27:22-10.200.16.10:39588.service: Deactivated successfully. Sep 6 01:24:56.164800 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 01:24:56.166419 systemd-logind[1571]: Removed session 16. Sep 6 01:24:56.158000 audit[6118]: CRED_DISP pid=6118 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:56.205605 kernel: audit: type=1106 audit(1757121896.158:514): pid=6118 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:56.205730 kernel: audit: type=1104 audit(1757121896.158:515): pid=6118 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:24:56.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.27:22-10.200.16.10:39588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:25:00.175714 systemd[1]: run-containerd-runc-k8s.io-31c28442c4258c80440ede4c427d6dcda161e5b05d2e882f5c1546afc6c909cb-runc.7iAU14.mount: Deactivated successfully. Sep 6 01:25:01.227402 systemd[1]: Started sshd@14-10.200.20.27:22-10.200.16.10:46662.service. Sep 6 01:25:01.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.27:22-10.200.16.10:46662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:01.234405 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:25:01.234472 kernel: audit: type=1130 audit(1757121901.227:517): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.27:22-10.200.16.10:46662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:01.651233 sshd[6172]: Accepted publickey for core from 10.200.16.10 port 46662 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:25:01.649000 audit[6172]: USER_ACCT pid=6172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:01.674074 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:25:01.672000 audit[6172]: CRED_ACQ pid=6172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:01.695761 kernel: audit: type=1101 audit(1757121901.649:518): pid=6172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:01.695906 kernel: audit: type=1103 audit(1757121901.672:519): pid=6172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:01.709402 kernel: audit: type=1006 audit(1757121901.672:520): pid=6172 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Sep 6 01:25:01.672000 audit[6172]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc48d62c0 a2=3 a3=1 items=0 ppid=1 pid=6172 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:01.731890 kernel: audit: type=1300 audit(1757121901.672:520): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc48d62c0 a2=3 a3=1 items=0 ppid=1 pid=6172 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:01.672000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:01.741224 kernel: audit: type=1327 audit(1757121901.672:520): proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:01.742415 systemd[1]: Started session-17.scope. 
Sep 6 01:25:01.742637 systemd-logind[1571]: New session 17 of user core. Sep 6 01:25:01.746000 audit[6172]: USER_START pid=6172 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:01.747000 audit[6175]: CRED_ACQ pid=6175 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:01.792650 kernel: audit: type=1105 audit(1757121901.746:521): pid=6172 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:01.792783 kernel: audit: type=1103 audit(1757121901.747:522): pid=6175 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:02.082979 sshd[6172]: pam_unix(sshd:session): session closed for user core Sep 6 01:25:02.083000 audit[6172]: USER_END pid=6172 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:02.113392 systemd[1]: sshd@14-10.200.20.27:22-10.200.16.10:46662.service: Deactivated successfully. Sep 6 01:25:02.114176 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 01:25:02.114356 systemd-logind[1571]: Session 17 logged out. Waiting for processes to exit. Sep 6 01:25:02.115334 systemd-logind[1571]: Removed session 17. Sep 6 01:25:02.110000 audit[6172]: CRED_DISP pid=6172 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:02.139415 kernel: audit: type=1106 audit(1757121902.083:523): pid=6172 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:02.139509 kernel: audit: type=1104 audit(1757121902.110:524): pid=6172 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:02.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.27:22-10.200.16.10:46662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:07.150493 systemd[1]: Started sshd@15-10.200.20.27:22-10.200.16.10:46666.service. 
Sep 6 01:25:07.176136 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:25:07.176252 kernel: audit: type=1130 audit(1757121907.149:526): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.27:22-10.200.16.10:46666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:07.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.27:22-10.200.16.10:46666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:07.566000 audit[6184]: USER_ACCT pid=6184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:07.568261 sshd[6184]: Accepted publickey for core from 10.200.16.10 port 46666 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:25:07.569963 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:25:07.568000 audit[6184]: CRED_ACQ pid=6184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:07.612574 kernel: audit: type=1101 audit(1757121907.566:527): pid=6184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:07.612699 kernel: audit: type=1103 audit(1757121907.568:528): pid=6184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:07.616960 kernel: audit: type=1006 audit(1757121907.568:529): pid=6184 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Sep 6 01:25:07.615642 systemd[1]: Started session-18.scope. Sep 6 01:25:07.616660 systemd-logind[1571]: New session 18 of user core. 
Sep 6 01:25:07.568000 audit[6184]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc21c9940 a2=3 a3=1 items=0 ppid=1 pid=6184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:07.651678 kernel: audit: type=1300 audit(1757121907.568:529): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc21c9940 a2=3 a3=1 items=0 ppid=1 pid=6184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:07.568000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:07.659994 kernel: audit: type=1327 audit(1757121907.568:529): proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:07.628000 audit[6184]: USER_START pid=6184 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:07.684531 kernel: audit: type=1105 audit(1757121907.628:530): pid=6184 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:07.638000 audit[6187]: CRED_ACQ pid=6187 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:07.705809 kernel: audit: type=1103 audit(1757121907.638:531): pid=6187 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:07.935140 sshd[6184]: pam_unix(sshd:session): session closed for user core Sep 6 01:25:07.934000 audit[6184]: USER_END pid=6184 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:07.937660 systemd[1]: sshd@15-10.200.20.27:22-10.200.16.10:46666.service: Deactivated successfully. Sep 6 01:25:07.938485 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 01:25:07.961292 systemd-logind[1571]: Session 18 logged out. Waiting for processes to exit. Sep 6 01:25:07.934000 audit[6184]: CRED_DISP pid=6184 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:07.962459 systemd-logind[1571]: Removed session 18. 
Sep 6 01:25:07.982674 kernel: audit: type=1106 audit(1757121907.934:532): pid=6184 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:07.982778 kernel: audit: type=1104 audit(1757121907.934:533): pid=6184 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:07.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.27:22-10.200.16.10:46666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:08.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.27:22-10.200.16.10:46674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:08.001797 systemd[1]: Started sshd@16-10.200.20.27:22-10.200.16.10:46674.service. Sep 6 01:25:08.412000 audit[6197]: USER_ACCT pid=6197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:08.414046 sshd[6197]: Accepted publickey for core from 10.200.16.10 port 46674 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:25:08.414000 audit[6197]: CRED_ACQ pid=6197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:08.414000 audit[6197]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc8d3f8f0 a2=3 a3=1 items=0 ppid=1 pid=6197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:08.414000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:08.415681 sshd[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:25:08.420125 systemd[1]: Started session-19.scope. Sep 6 01:25:08.420337 systemd-logind[1571]: New session 19 of user core. 
Sep 6 01:25:08.423000 audit[6197]: USER_START pid=6197 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:08.424000 audit[6200]: CRED_ACQ pid=6200 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:08.978730 sshd[6197]: pam_unix(sshd:session): session closed for user core Sep 6 01:25:08.978000 audit[6197]: USER_END pid=6197 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:08.978000 audit[6197]: CRED_DISP pid=6197 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:08.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.27:22-10.200.16.10:46674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:08.981263 systemd[1]: sshd@16-10.200.20.27:22-10.200.16.10:46674.service: Deactivated successfully. Sep 6 01:25:08.982530 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 01:25:08.982821 systemd-logind[1571]: Session 19 logged out. Waiting for processes to exit. Sep 6 01:25:08.983951 systemd-logind[1571]: Removed session 19. Sep 6 01:25:09.064791 systemd[1]: Started sshd@17-10.200.20.27:22-10.200.16.10:46688.service. Sep 6 01:25:09.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.27:22-10.200.16.10:46688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:25:09.556374 sshd[6208]: Accepted publickey for core from 10.200.16.10 port 46688 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:25:09.555000 audit[6208]: USER_ACCT pid=6208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:09.558051 sshd[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:25:09.556000 audit[6208]: CRED_ACQ pid=6208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:09.556000 audit[6208]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdeef2a40 a2=3 a3=1 items=0 ppid=1 pid=6208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:09.556000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:09.562373 systemd[1]: Started session-20.scope. Sep 6 01:25:09.562549 systemd-logind[1571]: New session 20 of user core. Sep 6 01:25:09.565000 audit[6208]: USER_START pid=6208 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:09.567000 audit[6211]: CRED_ACQ pid=6211 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:11.542000 audit[6223]: NETFILTER_CFG table=filter:137 family=2 entries=20 op=nft_register_rule pid=6223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:25:11.542000 audit[6223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffe768f3a0 a2=0 a3=1 items=0 ppid=2834 pid=6223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:11.542000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:25:11.549000 audit[6223]: NETFILTER_CFG table=nat:138 family=2 entries=26 op=nft_register_rule pid=6223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:25:11.549000 audit[6223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffe768f3a0 a2=0 a3=1 items=0 ppid=2834 pid=6223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:11.549000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:25:11.562000 audit[6225]: NETFILTER_CFG table=filter:139 family=2 entries=32 op=nft_register_rule pid=6225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:25:11.562000 audit[6225]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffc7826360 a2=0 a3=1 items=0 ppid=2834 pid=6225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:11.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:25:11.568000 audit[6225]: NETFILTER_CFG table=nat:140 family=2 entries=26 op=nft_register_rule pid=6225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:25:11.568000 audit[6225]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffc7826360 a2=0 a3=1 items=0 ppid=2834 pid=6225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:11.568000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:25:11.650257 sshd[6208]: pam_unix(sshd:session): session closed for user core Sep 6 01:25:11.649000 audit[6208]: USER_END pid=6208 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:11.649000 audit[6208]: CRED_DISP pid=6208 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:11.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.27:22-10.200.16.10:46688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:11.652766 systemd[1]: sshd@17-10.200.20.27:22-10.200.16.10:46688.service: Deactivated successfully. Sep 6 01:25:11.653765 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 01:25:11.654120 systemd-logind[1571]: Session 20 logged out. Waiting for processes to exit. Sep 6 01:25:11.655406 systemd-logind[1571]: Removed session 20. Sep 6 01:25:11.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.27:22-10.200.16.10:50438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:11.723163 systemd[1]: Started sshd@18-10.200.20.27:22-10.200.16.10:50438.service. 
Sep 6 01:25:12.177000 audit[6228]: USER_ACCT pid=6228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:12.180226 sshd[6228]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:25:12.180783 sshd[6228]: Accepted publickey for core from 10.200.16.10 port 50438 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:25:12.183789 kernel: kauditd_printk_skb: 36 callbacks suppressed Sep 6 01:25:12.183892 kernel: audit: type=1101 audit(1757121912.177:558): pid=6228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:12.178000 audit[6228]: CRED_ACQ pid=6228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:12.230492 kernel: audit: type=1103 audit(1757121912.178:559): pid=6228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:12.230739 kernel: audit: type=1006 audit(1757121912.178:560): pid=6228 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Sep 6 01:25:12.178000 audit[6228]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffce72e6a0 a2=3 a3=1 items=0 ppid=1 pid=6228 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:12.178000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:12.273983 systemd[1]: Started session-21.scope. Sep 6 01:25:12.275059 systemd-logind[1571]: New session 21 of user core. 
Sep 6 01:25:12.277603 kernel: audit: type=1300 audit(1757121912.178:560): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffce72e6a0 a2=3 a3=1 items=0 ppid=1 pid=6228 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:12.277674 kernel: audit: type=1327 audit(1757121912.178:560): proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:12.283000 audit[6228]: USER_START pid=6228 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:12.284000 audit[6231]: CRED_ACQ pid=6231 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:12.332732 kernel: audit: type=1105 audit(1757121912.283:561): pid=6228 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:12.332871 kernel: audit: type=1103 audit(1757121912.284:562): pid=6231 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:12.750475 sshd[6228]: pam_unix(sshd:session): session closed for user core Sep 6 01:25:12.753000 audit[6228]: USER_END pid=6228 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:12.780965 systemd-logind[1571]: Session 21 logged out. Waiting for processes to exit. Sep 6 01:25:12.784793 systemd[1]: sshd@18-10.200.20.27:22-10.200.16.10:50438.service: Deactivated successfully. Sep 6 01:25:12.791966 systemd[1]: run-containerd-runc-k8s.io-b08420d6fe328983dd2438f8c261e713497b7420bc85e27786f356502947a85c-runc.5WvwsT.mount: Deactivated successfully. Sep 6 01:25:12.793162 systemd[1]: session-21.scope: Deactivated successfully. 
Sep 6 01:25:12.753000 audit[6228]: CRED_DISP pid=6228 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:12.817856 kernel: audit: type=1106 audit(1757121912.753:563): pid=6228 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:12.817945 kernel: audit: type=1104 audit(1757121912.753:564): pid=6228 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:12.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.27:22-10.200.16.10:50438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:12.818477 systemd-logind[1571]: Removed session 21. Sep 6 01:25:12.843570 kernel: audit: type=1131 audit(1757121912.784:565): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.27:22-10.200.16.10:50438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:12.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.27:22-10.200.16.10:50448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:12.822654 systemd[1]: Started sshd@19-10.200.20.27:22-10.200.16.10:50448.service. Sep 6 01:25:13.294000 audit[6273]: USER_ACCT pid=6273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:13.295596 sshd[6273]: Accepted publickey for core from 10.200.16.10 port 50448 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:25:13.295000 audit[6273]: CRED_ACQ pid=6273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:13.295000 audit[6273]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc25f67f0 a2=3 a3=1 items=0 ppid=1 pid=6273 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:13.295000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:13.297158 sshd[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:25:13.301024 systemd-logind[1571]: New session 22 of user core. Sep 6 01:25:13.301425 systemd[1]: Started session-22.scope. 
Sep 6 01:25:13.304000 audit[6273]: USER_START pid=6273 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:13.305000 audit[6280]: CRED_ACQ pid=6280 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:13.688594 sshd[6273]: pam_unix(sshd:session): session closed for user core Sep 6 01:25:13.688000 audit[6273]: USER_END pid=6273 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:13.688000 audit[6273]: CRED_DISP pid=6273 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:13.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.27:22-10.200.16.10:50448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:13.691062 systemd[1]: sshd@19-10.200.20.27:22-10.200.16.10:50448.service: Deactivated successfully. Sep 6 01:25:13.692212 systemd-logind[1571]: Session 22 logged out. Waiting for processes to exit. Sep 6 01:25:13.692297 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 01:25:13.693363 systemd-logind[1571]: Removed session 22. 
Sep 6 01:25:17.261000 audit[6293]: NETFILTER_CFG table=filter:141 family=2 entries=20 op=nft_register_rule pid=6293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:25:17.280694 kernel: kauditd_printk_skb: 11 callbacks suppressed Sep 6 01:25:17.280848 kernel: audit: type=1325 audit(1757121917.261:575): table=filter:141 family=2 entries=20 op=nft_register_rule pid=6293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:25:17.261000 audit[6293]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffcf63eba0 a2=0 a3=1 items=0 ppid=2834 pid=6293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:17.305810 kernel: audit: type=1300 audit(1757121917.261:575): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffcf63eba0 a2=0 a3=1 items=0 ppid=2834 pid=6293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:17.261000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:25:17.318692 kernel: audit: type=1327 audit(1757121917.261:575): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:25:17.319000 audit[6293]: NETFILTER_CFG table=nat:142 family=2 entries=110 op=nft_register_chain pid=6293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:25:17.319000 audit[6293]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffcf63eba0 a2=0 a3=1 items=0 ppid=2834 pid=6293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:17.358731 kernel: audit: type=1325 audit(1757121917.319:576): table=nat:142 family=2 entries=110 op=nft_register_chain pid=6293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:25:17.358806 kernel: audit: type=1300 audit(1757121917.319:576): arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffcf63eba0 a2=0 a3=1 items=0 ppid=2834 pid=6293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:17.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:25:17.371468 kernel: audit: type=1327 audit(1757121917.319:576): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:25:18.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.27:22-10.200.16.10:50458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:18.756554 systemd[1]: Started sshd@20-10.200.20.27:22-10.200.16.10:50458.service. 
Sep 6 01:25:18.778398 kernel: audit: type=1130 audit(1757121918.755:577): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.27:22-10.200.16.10:50458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:19.205000 audit[6295]: USER_ACCT pid=6295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:19.207044 sshd[6295]: Accepted publickey for core from 10.200.16.10 port 50458 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:25:19.229301 kernel: audit: type=1101 audit(1757121919.205:578): pid=6295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:19.229000 audit[6295]: CRED_ACQ pid=6295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:19.230881 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:25:19.266619 kernel: audit: type=1103 audit(1757121919.229:579): pid=6295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:19.266724 kernel: audit: type=1006 audit(1757121919.229:580): pid=6295 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Sep 6 01:25:19.229000 audit[6295]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe20c2a10 a2=3 a3=1 items=0 ppid=1 pid=6295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:19.229000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:19.270461 systemd-logind[1571]: New session 23 of user core. Sep 6 01:25:19.270862 systemd[1]: Started session-23.scope. 
Sep 6 01:25:19.275000 audit[6295]: USER_START pid=6295 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:19.276000 audit[6298]: CRED_ACQ pid=6298 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:19.633946 sshd[6295]: pam_unix(sshd:session): session closed for user core Sep 6 01:25:19.633000 audit[6295]: USER_END pid=6295 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:19.634000 audit[6295]: CRED_DISP pid=6295 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:19.636489 systemd[1]: sshd@20-10.200.20.27:22-10.200.16.10:50458.service: Deactivated successfully. Sep 6 01:25:19.637292 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 01:25:19.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.27:22-10.200.16.10:50458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:19.638226 systemd-logind[1571]: Session 23 logged out. Waiting for processes to exit. Sep 6 01:25:19.639050 systemd-logind[1571]: Removed session 23. Sep 6 01:25:24.707866 systemd[1]: Started sshd@21-10.200.20.27:22-10.200.16.10:39632.service. Sep 6 01:25:24.735606 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 6 01:25:24.735673 kernel: audit: type=1130 audit(1757121924.707:586): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.27:22-10.200.16.10:39632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:24.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.27:22-10.200.16.10:39632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:25:25.158000 audit[6308]: USER_ACCT pid=6308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:25.161790 sshd[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:25:25.166293 sshd[6308]: Accepted publickey for core from 10.200.16.10 port 39632 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:25:25.160000 audit[6308]: CRED_ACQ pid=6308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:25.207161 kernel: audit: type=1101 audit(1757121925.158:587): pid=6308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:25.207306 kernel: audit: type=1103 audit(1757121925.160:588): pid=6308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:25.221042 kernel: audit: type=1006 audit(1757121925.160:589): pid=6308 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Sep 6 01:25:25.160000 audit[6308]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdbb79800 a2=3 a3=1 items=0 ppid=1 pid=6308 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:25.244186 kernel: audit: type=1300 audit(1757121925.160:589): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdbb79800 a2=3 a3=1 items=0 ppid=1 pid=6308 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:25.160000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:25.245590 systemd[1]: Started session-24.scope. Sep 6 01:25:25.253084 kernel: audit: type=1327 audit(1757121925.160:589): proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:25.253267 systemd-logind[1571]: New session 24 of user core. 
Sep 6 01:25:25.258000 audit[6308]: USER_START pid=6308 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:25.286000 audit[6314]: CRED_ACQ pid=6314 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:25.308996 kernel: audit: type=1105 audit(1757121925.258:590): pid=6308 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:25.309114 kernel: audit: type=1103 audit(1757121925.286:591): pid=6314 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:25.630782 sshd[6308]: pam_unix(sshd:session): session closed for user core Sep 6 01:25:25.630000 audit[6308]: USER_END pid=6308 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:25.634191 systemd-logind[1571]: Session 24 logged out. Waiting for processes to exit. Sep 6 01:25:25.635441 systemd[1]: sshd@21-10.200.20.27:22-10.200.16.10:39632.service: Deactivated successfully. Sep 6 01:25:25.636217 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 01:25:25.637599 systemd-logind[1571]: Removed session 24. Sep 6 01:25:25.630000 audit[6308]: CRED_DISP pid=6308 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:25.678659 kernel: audit: type=1106 audit(1757121925.630:592): pid=6308 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:25.678792 kernel: audit: type=1104 audit(1757121925.630:593): pid=6308 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:25.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.27:22-10.200.16.10:39632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:26.283341 systemd[1]: run-containerd-runc-k8s.io-b08420d6fe328983dd2438f8c261e713497b7420bc85e27786f356502947a85c-runc.Qggjn4.mount: Deactivated successfully. 
Sep 6 01:25:30.171936 systemd[1]: run-containerd-runc-k8s.io-31c28442c4258c80440ede4c427d6dcda161e5b05d2e882f5c1546afc6c909cb-runc.lzm4zw.mount: Deactivated successfully. Sep 6 01:25:30.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.27:22-10.200.16.10:32854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:30.699135 systemd[1]: Started sshd@22-10.200.20.27:22-10.200.16.10:32854.service. Sep 6 01:25:30.704120 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:25:30.704228 kernel: audit: type=1130 audit(1757121930.698:595): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.27:22-10.200.16.10:32854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:31.116000 audit[6366]: USER_ACCT pid=6366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:31.118519 sshd[6366]: Accepted publickey for core from 10.200.16.10 port 32854 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:25:31.142000 audit[6366]: CRED_ACQ pid=6366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:31.144110 sshd[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:25:31.149857 systemd[1]: Started session-25.scope. Sep 6 01:25:31.150822 systemd-logind[1571]: New session 25 of user core. 
Sep 6 01:25:31.166772 kernel: audit: type=1101 audit(1757121931.116:596): pid=6366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:31.166875 kernel: audit: type=1103 audit(1757121931.142:597): pid=6366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:31.180178 kernel: audit: type=1006 audit(1757121931.142:598): pid=6366 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Sep 6 01:25:31.180288 kernel: audit: type=1300 audit(1757121931.142:598): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffe0db1b0 a2=3 a3=1 items=0 ppid=1 pid=6366 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:31.142000 audit[6366]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffe0db1b0 a2=3 a3=1 items=0 ppid=1 pid=6366 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:31.142000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:31.211232 kernel: audit: type=1327 audit(1757121931.142:598): proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:31.211562 kernel: audit: type=1105 audit(1757121931.154:599): pid=6366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:31.154000 audit[6366]: USER_START pid=6366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:31.155000 audit[6369]: CRED_ACQ pid=6369 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:31.255655 kernel: audit: type=1103 audit(1757121931.155:600): pid=6369 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:31.513146 sshd[6366]: pam_unix(sshd:session): session closed for user core Sep 6 01:25:31.512000 audit[6366]: USER_END pid=6366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:31.515375 systemd[1]: sshd@22-10.200.20.27:22-10.200.16.10:32854.service: Deactivated successfully. 
Sep 6 01:25:31.516151 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 01:25:31.517159 systemd-logind[1571]: Session 25 logged out. Waiting for processes to exit. Sep 6 01:25:31.518591 systemd-logind[1571]: Removed session 25. Sep 6 01:25:31.512000 audit[6366]: CRED_DISP pid=6366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:31.558377 kernel: audit: type=1106 audit(1757121931.512:601): pid=6366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:31.558477 kernel: audit: type=1104 audit(1757121931.512:602): pid=6366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:31.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.27:22-10.200.16.10:32854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:36.579950 systemd[1]: Started sshd@23-10.200.20.27:22-10.200.16.10:32868.service. Sep 6 01:25:36.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.27:22-10.200.16.10:32868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:36.585607 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:25:36.585662 kernel: audit: type=1130 audit(1757121936.578:604): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.27:22-10.200.16.10:32868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:25:36.991000 audit[6386]: USER_ACCT pid=6386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:36.992995 sshd[6386]: Accepted publickey for core from 10.200.16.10 port 32868 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:25:37.016275 kernel: audit: type=1101 audit(1757121936.991:605): pid=6386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:37.015000 audit[6386]: CRED_ACQ pid=6386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:37.017486 sshd[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:25:37.053202 kernel: audit: type=1103 audit(1757121937.015:606): pid=6386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:37.053328 kernel: audit: type=1006 audit(1757121937.015:607): pid=6386 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Sep 6 01:25:37.015000 audit[6386]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9891960 a2=3 a3=1 items=0 ppid=1 pid=6386 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:37.077228 kernel: audit: type=1300 audit(1757121937.015:607): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9891960 a2=3 a3=1 items=0 ppid=1 pid=6386 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:25:37.015000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:37.085725 kernel: audit: type=1327 audit(1757121937.015:607): proctitle=737368643A20636F7265205B707269765D Sep 6 01:25:37.088827 systemd[1]: Started session-26.scope. Sep 6 01:25:37.089646 systemd-logind[1571]: New session 26 of user core. 
Sep 6 01:25:37.093000 audit[6386]: USER_START pid=6386 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:37.118000 audit[6389]: CRED_ACQ pid=6389 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:37.140802 kernel: audit: type=1105 audit(1757121937.093:608): pid=6386 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:37.140945 kernel: audit: type=1103 audit(1757121937.118:609): pid=6389 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:37.419454 sshd[6386]: pam_unix(sshd:session): session closed for user core Sep 6 01:25:37.419000 audit[6386]: USER_END pid=6386 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:37.420000 audit[6386]: CRED_DISP pid=6386 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:37.445485 systemd[1]: sshd@23-10.200.20.27:22-10.200.16.10:32868.service: Deactivated successfully. Sep 6 01:25:37.465687 kernel: audit: type=1106 audit(1757121937.419:610): pid=6386 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:37.465783 kernel: audit: type=1104 audit(1757121937.420:611): pid=6386 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 6 01:25:37.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.27:22-10.200.16.10:32868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:25:37.466207 systemd[1]: session-26.scope: Deactivated successfully. Sep 6 01:25:37.466786 systemd-logind[1571]: Session 26 logged out. Waiting for processes to exit. Sep 6 01:25:37.467808 systemd-logind[1571]: Removed session 26. Sep 6 01:25:42.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.27:22-10.200.16.10:39858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Sep 6 01:25:42.487998 systemd[1]: Started sshd@24-10.200.20.27:22-10.200.16.10:39858.service.
Sep 6 01:25:42.493361 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 6 01:25:42.493477 kernel: audit: type=1130 audit(1757121942.486:613): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.27:22-10.200.16.10:39858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:25:42.746758 systemd[1]: run-containerd-runc-k8s.io-778cf78c78b5b76721b95a03843c8a4a964e3e704bbc7af7a34a84fdf4c227ba-runc.ASxvAi.mount: Deactivated successfully.
Sep 6 01:25:42.901000 audit[6403]: USER_ACCT pid=6403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:42.903610 sshd[6403]: Accepted publickey for core from 10.200.16.10 port 39858 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:25:42.904554 sshd[6403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:25:42.902000 audit[6403]: CRED_ACQ pid=6403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:42.947587 kernel: audit: type=1101 audit(1757121942.901:614): pid=6403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:42.947704 kernel: audit: type=1103 audit(1757121942.902:615): pid=6403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:42.961756 kernel: audit: type=1006 audit(1757121942.902:616): pid=6403 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Sep 6 01:25:42.902000 audit[6403]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd26b4150 a2=3 a3=1 items=0 ppid=1 pid=6403 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:25:42.986247 kernel: audit: type=1300 audit(1757121942.902:616): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd26b4150 a2=3 a3=1 items=0 ppid=1 pid=6403 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:25:42.902000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 6 01:25:42.989127 systemd[1]: Started session-27.scope.
Sep 6 01:25:42.994580 kernel: audit: type=1327 audit(1757121942.902:616): proctitle=737368643A20636F7265205B707269765D
Sep 6 01:25:42.994664 systemd-logind[1571]: New session 27 of user core.
Sep 6 01:25:42.999000 audit[6403]: USER_START pid=6403 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:43.000000 audit[6446]: CRED_ACQ pid=6446 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:43.046409 kernel: audit: type=1105 audit(1757121942.999:617): pid=6403 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:43.046495 kernel: audit: type=1103 audit(1757121943.000:618): pid=6446 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:43.316448 sshd[6403]: pam_unix(sshd:session): session closed for user core
Sep 6 01:25:43.316000 audit[6403]: USER_END pid=6403 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:43.321466 systemd[1]: sshd@24-10.200.20.27:22-10.200.16.10:39858.service: Deactivated successfully.
Sep 6 01:25:43.322265 systemd[1]: session-27.scope: Deactivated successfully.
Sep 6 01:25:43.344343 systemd-logind[1571]: Session 27 logged out. Waiting for processes to exit.
Sep 6 01:25:43.318000 audit[6403]: CRED_DISP pid=6403 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:43.365867 kernel: audit: type=1106 audit(1757121943.316:619): pid=6403 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:43.366063 kernel: audit: type=1104 audit(1757121943.318:620): pid=6403 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:43.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.27:22-10.200.16.10:39858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:25:43.366689 systemd-logind[1571]: Removed session 27.
Sep 6 01:25:43.736986 systemd[1]: run-containerd-runc-k8s.io-b08420d6fe328983dd2438f8c261e713497b7420bc85e27786f356502947a85c-runc.02UKDK.mount: Deactivated successfully.
Sep 6 01:25:48.384145 systemd[1]: Started sshd@25-10.200.20.27:22-10.200.16.10:39870.service.
Sep 6 01:25:48.409323 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 6 01:25:48.409356 kernel: audit: type=1130 audit(1757121948.383:622): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.27:22-10.200.16.10:39870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:25:48.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.27:22-10.200.16.10:39870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:25:48.801000 audit[6461]: USER_ACCT pid=6461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:48.802700 sshd[6461]: Accepted publickey for core from 10.200.16.10 port 39870 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:25:48.804420 sshd[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:25:48.802000 audit[6461]: CRED_ACQ pid=6461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:48.846730 kernel: audit: type=1101 audit(1757121948.801:623): pid=6461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:48.846842 kernel: audit: type=1103 audit(1757121948.802:624): pid=6461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:48.846872 kernel: audit: type=1006 audit(1757121948.802:625): pid=6461 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Sep 6 01:25:48.802000 audit[6461]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc560b0f0 a2=3 a3=1 items=0 ppid=1 pid=6461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:25:48.882602 kernel: audit: type=1300 audit(1757121948.802:625): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc560b0f0 a2=3 a3=1 items=0 ppid=1 pid=6461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:25:48.802000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 6 01:25:48.885585 systemd[1]: Started session-28.scope.
Sep 6 01:25:48.890397 kernel: audit: type=1327 audit(1757121948.802:625): proctitle=737368643A20636F7265205B707269765D
Sep 6 01:25:48.890487 systemd-logind[1571]: New session 28 of user core.
Sep 6 01:25:48.894000 audit[6461]: USER_START pid=6461 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:48.895000 audit[6464]: CRED_ACQ pid=6464 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:48.941052 kernel: audit: type=1105 audit(1757121948.894:626): pid=6461 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:48.941180 kernel: audit: type=1103 audit(1757121948.895:627): pid=6464 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:49.224462 sshd[6461]: pam_unix(sshd:session): session closed for user core
Sep 6 01:25:49.224000 audit[6461]: USER_END pid=6461 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:49.230898 systemd[1]: sshd@25-10.200.20.27:22-10.200.16.10:39870.service: Deactivated successfully.
Sep 6 01:25:49.231728 systemd[1]: session-28.scope: Deactivated successfully.
Sep 6 01:25:49.250486 systemd-logind[1571]: Session 28 logged out. Waiting for processes to exit.
Sep 6 01:25:49.228000 audit[6461]: CRED_DISP pid=6461 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:49.272117 kernel: audit: type=1106 audit(1757121949.224:628): pid=6461 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:49.272262 kernel: audit: type=1104 audit(1757121949.228:629): pid=6461 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 6 01:25:49.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.27:22-10.200.16.10:39870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:25:49.273259 systemd-logind[1571]: Removed session 28.