Sep 13 01:32:52.000762 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 13 01:32:52.000779 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 12 23:05:37 -00 2025
Sep 13 01:32:52.000788 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 13 01:32:52.000795 kernel: printk: bootconsole [pl11] enabled
Sep 13 01:32:52.000800 kernel: efi: EFI v2.70 by EDK II
Sep 13 01:32:52.000805 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3761cf98
Sep 13 01:32:52.000812 kernel: random: crng init done
Sep 13 01:32:52.000818 kernel: ACPI: Early table checksum verification disabled
Sep 13 01:32:52.000823 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 13 01:32:52.000829 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:52.000834 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:52.000839 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 13 01:32:52.000846 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:52.000852 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:52.000858 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:52.000864 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:52.000870 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:52.000877 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:52.000883 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 13 01:32:52.000889 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:52.000895 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 13 01:32:52.000900 kernel: NUMA: Failed to initialise from firmware
Sep 13 01:32:52.000906 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Sep 13 01:32:52.000912 kernel: NUMA: NODE_DATA [mem 0x1bf7f1900-0x1bf7f6fff]
Sep 13 01:32:52.000917 kernel: Zone ranges:
Sep 13 01:32:52.000923 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 13 01:32:52.000929 kernel: DMA32 empty
Sep 13 01:32:52.000934 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 13 01:32:52.000941 kernel: Movable zone start for each node
Sep 13 01:32:52.000947 kernel: Early memory node ranges
Sep 13 01:32:52.000953 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 13 01:32:52.000958 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Sep 13 01:32:52.000964 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 13 01:32:52.000970 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 13 01:32:52.000975 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 13 01:32:52.000981 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 13 01:32:52.000987 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 13 01:32:52.000992 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 13 01:32:52.000998 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 13 01:32:52.001004 kernel: psci: probing for conduit method from ACPI.
Sep 13 01:32:52.001013 kernel: psci: PSCIv1.1 detected in firmware.
Sep 13 01:32:52.001019 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 01:32:52.001026 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 13 01:32:52.001032 kernel: psci: SMC Calling Convention v1.4
Sep 13 01:32:52.001038 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Sep 13 01:32:52.001045 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Sep 13 01:32:52.001051 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Sep 13 01:32:52.001057 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Sep 13 01:32:52.001064 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 13 01:32:52.001070 kernel: Detected PIPT I-cache on CPU0
Sep 13 01:32:52.001076 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 01:32:52.001082 kernel: CPU features: detected: Hardware dirty bit management
Sep 13 01:32:52.001088 kernel: CPU features: detected: Spectre-BHB
Sep 13 01:32:52.001094 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 01:32:52.001100 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 01:32:52.001106 kernel: CPU features: detected: ARM erratum 1418040
Sep 13 01:32:52.001113 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 13 01:32:52.001119 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 13 01:32:52.001126 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 13 01:32:52.001131 kernel: Policy zone: Normal
Sep 13 01:32:52.001139 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 01:32:52.001145 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 01:32:52.001152 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 01:32:52.001158 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 01:32:52.001164 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 01:32:52.001170 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Sep 13 01:32:52.001176 kernel: Memory: 3986872K/4194160K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 207288K reserved, 0K cma-reserved)
Sep 13 01:32:52.001184 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 01:32:52.001190 kernel: trace event string verifier disabled
Sep 13 01:32:52.001196 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 01:32:52.001202 kernel: rcu: RCU event tracing is enabled.
Sep 13 01:32:52.001209 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 01:32:52.001215 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 01:32:52.001221 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 01:32:52.001227 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 01:32:52.001233 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 01:32:52.001240 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 01:32:52.001246 kernel: GICv3: 960 SPIs implemented
Sep 13 01:32:52.001253 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 01:32:52.001259 kernel: GICv3: Distributor has no Range Selector support
Sep 13 01:32:52.001265 kernel: Root IRQ handler: gic_handle_irq
Sep 13 01:32:52.001271 kernel: GICv3: 16 PPIs implemented
Sep 13 01:32:52.001277 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 13 01:32:52.001283 kernel: ITS: No ITS available, not enabling LPIs
Sep 13 01:32:52.001289 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 01:32:52.001295 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 13 01:32:52.001302 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 13 01:32:52.001308 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 13 01:32:52.001315 kernel: Console: colour dummy device 80x25
Sep 13 01:32:52.001322 kernel: printk: console [tty1] enabled
Sep 13 01:32:52.001329 kernel: ACPI: Core revision 20210730
Sep 13 01:32:52.001335 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 13 01:32:52.001342 kernel: pid_max: default: 32768 minimum: 301
Sep 13 01:32:52.001348 kernel: LSM: Security Framework initializing
Sep 13 01:32:52.001354 kernel: SELinux: Initializing.
Sep 13 01:32:52.001360 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 01:32:52.001367 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 01:32:52.001373 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 13 01:32:52.001381 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 13 01:32:52.001387 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 01:32:52.001393 kernel: Remapping and enabling EFI services.
Sep 13 01:32:52.001399 kernel: smp: Bringing up secondary CPUs ...
Sep 13 01:32:52.001405 kernel: Detected PIPT I-cache on CPU1
Sep 13 01:32:52.001412 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 13 01:32:52.001418 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 01:32:52.001424 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 13 01:32:52.001430 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 01:32:52.001436 kernel: SMP: Total of 2 processors activated.
Sep 13 01:32:52.001444 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 01:32:52.001450 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 13 01:32:52.001457 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 13 01:32:52.001510 kernel: CPU features: detected: CRC32 instructions
Sep 13 01:32:52.001518 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 13 01:32:52.001524 kernel: CPU features: detected: LSE atomic instructions
Sep 13 01:32:52.001530 kernel: CPU features: detected: Privileged Access Never
Sep 13 01:32:52.001536 kernel: CPU: All CPU(s) started at EL1
Sep 13 01:32:52.001542 kernel: alternatives: patching kernel code
Sep 13 01:32:52.001551 kernel: devtmpfs: initialized
Sep 13 01:32:52.001561 kernel: KASLR enabled
Sep 13 01:32:52.001568 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 01:32:52.001576 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 01:32:52.001582 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 01:32:52.001589 kernel: SMBIOS 3.1.0 present.
Sep 13 01:32:52.001596 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 13 01:32:52.001602 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 01:32:52.001609 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 01:32:52.001617 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 01:32:52.001624 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 01:32:52.001630 kernel: audit: initializing netlink subsys (disabled)
Sep 13 01:32:52.001637 kernel: audit: type=2000 audit(0.087:1): state=initialized audit_enabled=0 res=1
Sep 13 01:32:52.001644 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 01:32:52.001650 kernel: cpuidle: using governor menu
Sep 13 01:32:52.001657 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 01:32:52.001665 kernel: ASID allocator initialised with 32768 entries
Sep 13 01:32:52.001671 kernel: ACPI: bus type PCI registered
Sep 13 01:32:52.001678 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 01:32:52.001684 kernel: Serial: AMBA PL011 UART driver
Sep 13 01:32:52.001691 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 01:32:52.001698 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 01:32:52.001704 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 01:32:52.001711 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 01:32:52.001717 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 01:32:52.001725 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 01:32:52.001732 kernel: ACPI: Added _OSI(Module Device)
Sep 13 01:32:52.001738 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 01:32:52.001745 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 01:32:52.001751 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 01:32:52.001758 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 01:32:52.001764 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 01:32:52.001771 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 01:32:52.001778 kernel: ACPI: Interpreter enabled
Sep 13 01:32:52.001785 kernel: ACPI: Using GIC for interrupt routing
Sep 13 01:32:52.001792 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 13 01:32:52.001799 kernel: printk: console [ttyAMA0] enabled
Sep 13 01:32:52.001805 kernel: printk: bootconsole [pl11] disabled
Sep 13 01:32:52.001812 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 13 01:32:52.001819 kernel: iommu: Default domain type: Translated
Sep 13 01:32:52.001825 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 01:32:52.001832 kernel: vgaarb: loaded
Sep 13 01:32:52.001838 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 01:32:52.001845 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 01:32:52.001853 kernel: PTP clock support registered
Sep 13 01:32:52.001860 kernel: Registered efivars operations
Sep 13 01:32:52.001866 kernel: No ACPI PMU IRQ for CPU0
Sep 13 01:32:52.001873 kernel: No ACPI PMU IRQ for CPU1
Sep 13 01:32:52.001879 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 01:32:52.001886 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 01:32:52.001892 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 01:32:52.001899 kernel: pnp: PnP ACPI init
Sep 13 01:32:52.001905 kernel: pnp: PnP ACPI: found 0 devices
Sep 13 01:32:52.001913 kernel: NET: Registered PF_INET protocol family
Sep 13 01:32:52.001919 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 01:32:52.001926 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 01:32:52.001933 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 01:32:52.001940 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 01:32:52.001947 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 01:32:52.001954 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 01:32:52.001960 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 01:32:52.001968 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 01:32:52.001975 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 01:32:52.001982 kernel: PCI: CLS 0 bytes, default 64
Sep 13 01:32:52.001989 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 13 01:32:52.001995 kernel: kvm [1]: HYP mode not available
Sep 13 01:32:52.002002 kernel: Initialise system trusted keyrings
Sep 13 01:32:52.002008 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 01:32:52.002015 kernel: Key type asymmetric registered
Sep 13 01:32:52.002021 kernel: Asymmetric key parser 'x509' registered
Sep 13 01:32:52.002029 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 01:32:52.002036 kernel: io scheduler mq-deadline registered
Sep 13 01:32:52.002042 kernel: io scheduler kyber registered
Sep 13 01:32:52.002049 kernel: io scheduler bfq registered
Sep 13 01:32:52.002055 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 01:32:52.002062 kernel: thunder_xcv, ver 1.0
Sep 13 01:32:52.002069 kernel: thunder_bgx, ver 1.0
Sep 13 01:32:52.002075 kernel: nicpf, ver 1.0
Sep 13 01:32:52.002081 kernel: nicvf, ver 1.0
Sep 13 01:32:52.002196 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 01:32:52.002259 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T01:32:51 UTC (1757727171)
Sep 13 01:32:52.002268 kernel: efifb: probing for efifb
Sep 13 01:32:52.002276 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 13 01:32:52.002282 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 13 01:32:52.002289 kernel: efifb: scrolling: redraw
Sep 13 01:32:52.002295 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 01:32:52.002302 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 01:32:52.002310 kernel: fb0: EFI VGA frame buffer device
Sep 13 01:32:52.002317 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 13 01:32:52.002323 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 01:32:52.002330 kernel: NET: Registered PF_INET6 protocol family
Sep 13 01:32:52.002337 kernel: Segment Routing with IPv6
Sep 13 01:32:52.002343 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 01:32:52.002350 kernel: NET: Registered PF_PACKET protocol family
Sep 13 01:32:52.002357 kernel: Key type dns_resolver registered
Sep 13 01:32:52.002363 kernel: registered taskstats version 1
Sep 13 01:32:52.002370 kernel: Loading compiled-in X.509 certificates
Sep 13 01:32:52.002378 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 47ac98e9306f36eebe4291d409359a5a5d0c2b9c'
Sep 13 01:32:52.002385 kernel: Key type .fscrypt registered
Sep 13 01:32:52.002391 kernel: Key type fscrypt-provisioning registered
Sep 13 01:32:52.002398 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 01:32:52.002405 kernel: ima: Allocated hash algorithm: sha1
Sep 13 01:32:52.002411 kernel: ima: No architecture policies found
Sep 13 01:32:52.002418 kernel: clk: Disabling unused clocks
Sep 13 01:32:52.002424 kernel: Freeing unused kernel memory: 36416K
Sep 13 01:32:52.002432 kernel: Run /init as init process
Sep 13 01:32:52.002439 kernel: with arguments:
Sep 13 01:32:52.002445 kernel: /init
Sep 13 01:32:52.002452 kernel: with environment:
Sep 13 01:32:52.002458 kernel: HOME=/
Sep 13 01:32:52.002473 kernel: TERM=linux
Sep 13 01:32:52.002480 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 01:32:52.002489 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 01:32:52.002499 systemd[1]: Detected virtualization microsoft.
Sep 13 01:32:52.002506 systemd[1]: Detected architecture arm64.
Sep 13 01:32:52.002513 systemd[1]: Running in initrd.
Sep 13 01:32:52.002523 systemd[1]: No hostname configured, using default hostname.
Sep 13 01:32:52.002530 systemd[1]: Hostname set to .
Sep 13 01:32:52.002537 systemd[1]: Initializing machine ID from random generator.
Sep 13 01:32:52.002544 systemd[1]: Queued start job for default target initrd.target.
Sep 13 01:32:52.002551 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 01:32:52.002560 systemd[1]: Reached target cryptsetup.target.
Sep 13 01:32:52.002567 systemd[1]: Reached target paths.target.
Sep 13 01:32:52.002574 systemd[1]: Reached target slices.target.
Sep 13 01:32:52.002581 systemd[1]: Reached target swap.target.
Sep 13 01:32:52.002588 systemd[1]: Reached target timers.target.
Sep 13 01:32:52.002595 systemd[1]: Listening on iscsid.socket.
Sep 13 01:32:52.002602 systemd[1]: Listening on iscsiuio.socket.
Sep 13 01:32:52.002610 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 01:32:52.002618 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 01:32:52.002625 systemd[1]: Listening on systemd-journald.socket.
Sep 13 01:32:52.002632 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 01:32:52.002639 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 01:32:52.002646 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 01:32:52.002653 systemd[1]: Reached target sockets.target.
Sep 13 01:32:52.002660 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 01:32:52.002667 systemd[1]: Finished network-cleanup.service.
Sep 13 01:32:52.002675 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 01:32:52.002683 systemd[1]: Starting systemd-journald.service...
Sep 13 01:32:52.002690 systemd[1]: Starting systemd-modules-load.service...
Sep 13 01:32:52.002697 systemd[1]: Starting systemd-resolved.service...
Sep 13 01:32:52.002708 systemd-journald[276]: Journal started
Sep 13 01:32:52.002745 systemd-journald[276]: Runtime Journal (/run/log/journal/e3b045586b3b45e6b2471f72652e3c9b) is 8.0M, max 78.5M, 70.5M free.
Sep 13 01:32:52.003384 systemd-modules-load[277]: Inserted module 'overlay'
Sep 13 01:32:52.026554 systemd-resolved[278]: Positive Trust Anchors:
Sep 13 01:32:52.035374 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 01:32:52.030682 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 01:32:52.065593 systemd[1]: Started systemd-journald.service.
Sep 13 01:32:52.065615 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 01:32:52.065627 kernel: Bridge firewalling registered
Sep 13 01:32:52.030715 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 01:32:52.133703 kernel: SCSI subsystem initialized
Sep 13 01:32:52.133729 kernel: audit: type=1130 audit(1757727172.102:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.133740 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 01:32:52.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.032888 systemd-resolved[278]: Defaulting to hostname 'linux'.
Sep 13 01:32:52.165958 kernel: device-mapper: uevent: version 1.0.3
Sep 13 01:32:52.165978 kernel: audit: type=1130 audit(1757727172.137:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.165987 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 01:32:52.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.061173 systemd-modules-load[277]: Inserted module 'br_netfilter'
Sep 13 01:32:52.193533 kernel: audit: type=1130 audit(1757727172.169:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.102948 systemd[1]: Started systemd-resolved.service.
Sep 13 01:32:52.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.156458 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 01:32:52.170379 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 01:32:52.263518 kernel: audit: type=1130 audit(1757727172.193:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.263546 kernel: audit: type=1130 audit(1757727172.197:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.263555 kernel: audit: type=1130 audit(1757727172.220:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.174834 systemd-modules-load[277]: Inserted module 'dm_multipath'
Sep 13 01:32:52.193572 systemd[1]: Finished systemd-modules-load.service.
Sep 13 01:32:52.198278 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 01:32:52.221362 systemd[1]: Reached target nss-lookup.target.
Sep 13 01:32:52.263210 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 01:32:52.268284 systemd[1]: Starting systemd-sysctl.service...
Sep 13 01:32:52.290369 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 01:32:52.308545 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 01:32:52.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.316960 systemd[1]: Finished systemd-sysctl.service.
Sep 13 01:32:52.342359 kernel: audit: type=1130 audit(1757727172.316:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.339007 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 01:32:52.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.363729 systemd[1]: Starting dracut-cmdline.service...
Sep 13 01:32:52.389957 kernel: audit: type=1130 audit(1757727172.338:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.390000 kernel: audit: type=1130 audit(1757727172.362:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.395458 dracut-cmdline[300]: dracut-dracut-053
Sep 13 01:32:52.401162 dracut-cmdline[300]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 01:32:52.475494 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 01:32:52.490490 kernel: iscsi: registered transport (tcp)
Sep 13 01:32:52.510689 kernel: iscsi: registered transport (qla4xxx)
Sep 13 01:32:52.510751 kernel: QLogic iSCSI HBA Driver
Sep 13 01:32:52.540533 systemd[1]: Finished dracut-cmdline.service.
Sep 13 01:32:52.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:52.545740 systemd[1]: Starting dracut-pre-udev.service...
Sep 13 01:32:52.597482 kernel: raid6: neonx8 gen() 13737 MB/s
Sep 13 01:32:52.618476 kernel: raid6: neonx8 xor() 10829 MB/s
Sep 13 01:32:52.638473 kernel: raid6: neonx4 gen() 13505 MB/s
Sep 13 01:32:52.658472 kernel: raid6: neonx4 xor() 11057 MB/s
Sep 13 01:32:52.680473 kernel: raid6: neonx2 gen() 12952 MB/s
Sep 13 01:32:52.700472 kernel: raid6: neonx2 xor() 10367 MB/s
Sep 13 01:32:52.720473 kernel: raid6: neonx1 gen() 10458 MB/s
Sep 13 01:32:52.741473 kernel: raid6: neonx1 xor() 8783 MB/s
Sep 13 01:32:52.761472 kernel: raid6: int64x8 gen() 6273 MB/s
Sep 13 01:32:52.781472 kernel: raid6: int64x8 xor() 3546 MB/s
Sep 13 01:32:52.802473 kernel: raid6: int64x4 gen() 7239 MB/s
Sep 13 01:32:52.822472 kernel: raid6: int64x4 xor() 3855 MB/s
Sep 13 01:32:52.842472 kernel: raid6: int64x2 gen() 6155 MB/s
Sep 13 01:32:52.863473 kernel: raid6: int64x2 xor() 3321 MB/s
Sep 13 01:32:52.883472 kernel: raid6: int64x1 gen() 5047 MB/s
Sep 13 01:32:52.907498 kernel: raid6: int64x1 xor() 2648 MB/s
Sep 13 01:32:52.907510 kernel: raid6: using algorithm neonx8 gen() 13737 MB/s
Sep 13 01:32:52.907519 kernel: raid6: .... xor() 10829 MB/s, rmw enabled
Sep 13 01:32:52.911458 kernel: raid6: using neon recovery algorithm
Sep 13 01:32:52.933166 kernel: xor: measuring software checksum speed
Sep 13 01:32:52.933178 kernel: 8regs : 17227 MB/sec
Sep 13 01:32:52.936850 kernel: 32regs : 20686 MB/sec
Sep 13 01:32:52.940448 kernel: arm64_neon : 27965 MB/sec
Sep 13 01:32:52.940475 kernel: xor: using function: arm64_neon (27965 MB/sec)
Sep 13 01:32:53.000482 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 13 01:32:53.009389 systemd[1]: Finished dracut-pre-udev.service.
Sep 13 01:32:53.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:53.017000 audit: BPF prog-id=7 op=LOAD
Sep 13 01:32:53.017000 audit: BPF prog-id=8 op=LOAD
Sep 13 01:32:53.018083 systemd[1]: Starting systemd-udevd.service...
Sep 13 01:32:53.035362 systemd-udevd[477]: Using default interface naming scheme 'v252'.
Sep 13 01:32:53.041063 systemd[1]: Started systemd-udevd.service.
Sep 13 01:32:53.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:53.050871 systemd[1]: Starting dracut-pre-trigger.service...
Sep 13 01:32:53.066659 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Sep 13 01:32:53.094881 systemd[1]: Finished dracut-pre-trigger.service.
Sep 13 01:32:53.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:53.100132 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 01:32:53.140673 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 01:32:53.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:53.192484 kernel: hv_vmbus: Vmbus version:5.3
Sep 13 01:32:53.206497 kernel: hv_vmbus: registering driver hid_hyperv
Sep 13 01:32:53.206544 kernel: hv_vmbus: registering driver hv_netvsc
Sep 13 01:32:53.214498 kernel: hv_vmbus: registering driver hv_storvsc
Sep 13 01:32:53.226072 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 13 01:32:53.226109 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Sep 13 01:32:53.236371 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 13 01:32:53.236678 kernel: scsi host0: storvsc_host_t
Sep 13 01:32:53.251592 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Sep 13 01:32:53.251634 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 13 01:32:53.261307 kernel: scsi host1: storvsc_host_t
Sep 13 01:32:53.268482 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 13 01:32:53.288987 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 13 01:32:53.290554 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 01:32:53.290573 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 13 01:32:53.310251 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 13 01:32:53.336336 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 13 01:32:53.336443 kernel: hv_netvsc 0022487a-6966-0022-487a-69660022487a eth0: VF slot 1 added
Sep 13 01:32:53.336569 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 13 01:32:53.336667 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 13 01:32:53.336744 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 13 01:32:53.336827 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 01:32:53.336836 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 13 01:32:53.350446 kernel: hv_vmbus: registering driver hv_pci
Sep 13 01:32:53.350566 kernel: hv_pci 62e33025-a098-46ed-ab6f-70a349f93955: PCI VMBus probing: Using version 0x10004
Sep 13 01:32:53.432013 kernel: hv_pci 62e33025-a098-46ed-ab6f-70a349f93955: PCI host bridge to bus a098:00
Sep 13 01:32:53.432112 kernel: pci_bus a098:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 13 01:32:53.432208 kernel: pci_bus a098:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 13 01:32:53.432279 kernel: pci a098:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 13 01:32:53.432370 kernel: pci a098:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 13 01:32:53.432447 kernel: pci a098:00:02.0: enabling Extended Tags
Sep 13 01:32:53.432579 kernel: pci a098:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a098:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 13 01:32:53.432661 kernel: pci_bus a098:00: busn_res: [bus 00-ff] end is updated to 00
Sep 13 01:32:53.432733 kernel: pci a098:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 13 01:32:53.470547 kernel: mlx5_core a098:00:02.0: enabling device (0000 -> 0002)
Sep 13 01:32:53.700415 kernel: mlx5_core a098:00:02.0: firmware version: 16.30.1284
Sep 13 01:32:53.700548 kernel: mlx5_core a098:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Sep 13 01:32:53.700631 kernel: hv_netvsc 0022487a-6966-0022-487a-69660022487a eth0: VF registering: eth1
Sep 13 01:32:53.700711 kernel: mlx5_core a098:00:02.0 eth1: joined to eth0
Sep 13 01:32:53.709490 kernel: mlx5_core a098:00:02.0 enP41112s1: renamed from eth1
Sep 13 01:32:53.826149 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 13 01:32:53.833158 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (533)
Sep 13 01:32:53.843185 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 01:32:54.068633 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 13 01:32:54.096925 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 13 01:32:54.103111 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 13 01:32:54.114914 systemd[1]: Starting disk-uuid.service...
Sep 13 01:32:54.142493 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 01:32:54.152491 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 01:32:55.162891 disk-uuid[607]: The operation has completed successfully.
Sep 13 01:32:55.167787 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 01:32:55.231190 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 01:32:55.235606 systemd[1]: Finished disk-uuid.service.
Sep 13 01:32:55.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.248425 systemd[1]: Starting verity-setup.service...
Sep 13 01:32:55.300777 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 13 01:32:55.695083 systemd[1]: Found device dev-mapper-usr.device.
Sep 13 01:32:55.705918 systemd[1]: Finished verity-setup.service.
Sep 13 01:32:55.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.710855 systemd[1]: Mounting sysusr-usr.mount...
Sep 13 01:32:55.774491 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 13 01:32:55.774897 systemd[1]: Mounted sysusr-usr.mount.
Sep 13 01:32:55.779005 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 13 01:32:55.779766 systemd[1]: Starting ignition-setup.service...
Sep 13 01:32:55.795540 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 13 01:32:55.828883 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 01:32:55.828929 kernel: BTRFS info (device sda6): using free space tree
Sep 13 01:32:55.833450 kernel: BTRFS info (device sda6): has skinny extents
Sep 13 01:32:55.865568 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 13 01:32:55.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.874000 audit: BPF prog-id=9 op=LOAD
Sep 13 01:32:55.874973 systemd[1]: Starting systemd-networkd.service...
Sep 13 01:32:55.896707 systemd-networkd[871]: lo: Link UP
Sep 13 01:32:55.896718 systemd-networkd[871]: lo: Gained carrier
Sep 13 01:32:55.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.897145 systemd-networkd[871]: Enumeration completed
Sep 13 01:32:55.897236 systemd[1]: Started systemd-networkd.service.
Sep 13 01:32:55.904822 systemd[1]: Reached target network.target.
Sep 13 01:32:55.908407 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 01:32:55.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.916940 systemd[1]: Starting iscsiuio.service...
Sep 13 01:32:55.925015 systemd[1]: Started iscsiuio.service.
Sep 13 01:32:55.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.933424 systemd[1]: Starting iscsid.service...
Sep 13 01:32:55.966216 iscsid[878]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 01:32:55.966216 iscsid[878]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Sep 13 01:32:55.966216 iscsid[878]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Sep 13 01:32:55.966216 iscsid[878]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 13 01:32:55.966216 iscsid[878]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 13 01:32:55.966216 iscsid[878]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 01:32:55.966216 iscsid[878]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 13 01:32:56.079169 kernel: kauditd_printk_skb: 15 callbacks suppressed
Sep 13 01:32:56.079197 kernel: audit: type=1130 audit(1757727176.009:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.079208 kernel: mlx5_core a098:00:02.0 enP41112s1: Link up
Sep 13 01:32:56.079357 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 13 01:32:56.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.947673 systemd[1]: Started iscsid.service.
Sep 13 01:32:55.956181 systemd[1]: Starting dracut-initqueue.service...
Sep 13 01:32:55.992253 systemd[1]: Finished dracut-initqueue.service.
Sep 13 01:32:56.125271 kernel: hv_netvsc 0022487a-6966-0022-487a-69660022487a eth0: Data path switched to VF: enP41112s1
Sep 13 01:32:56.125440 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 13 01:32:56.125451 kernel: audit: type=1130 audit(1757727176.110:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.010099 systemd[1]: Reached target remote-fs-pre.target.
Sep 13 01:32:56.045073 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 01:32:56.049897 systemd[1]: Reached target remote-fs.target.
Sep 13 01:32:56.073220 systemd[1]: Starting dracut-pre-mount.service...
Sep 13 01:32:56.087311 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 01:32:56.092641 systemd[1]: Finished dracut-pre-mount.service.
Sep 13 01:32:56.103593 systemd-networkd[871]: enP41112s1: Link UP
Sep 13 01:32:56.103704 systemd-networkd[871]: eth0: Link UP
Sep 13 01:32:56.125897 systemd-networkd[871]: eth0: Gained carrier
Sep 13 01:32:56.135649 systemd-networkd[871]: enP41112s1: Gained carrier
Sep 13 01:32:56.152799 systemd-networkd[871]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 13 01:32:56.923554 systemd[1]: Finished ignition-setup.service.
Sep 13 01:32:56.928937 systemd[1]: Starting ignition-fetch-offline.service...
Sep 13 01:32:56.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.952481 kernel: audit: type=1130 audit(1757727176.927:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:58.053632 systemd-networkd[871]: eth0: Gained IPv6LL
Sep 13 01:33:00.311842 ignition[898]: Ignition 2.14.0
Sep 13 01:33:00.311855 ignition[898]: Stage: fetch-offline
Sep 13 01:33:00.311952 ignition[898]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:33:00.312009 ignition[898]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:33:00.442675 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:33:00.442812 ignition[898]: parsed url from cmdline: ""
Sep 13 01:33:00.442816 ignition[898]: no config URL provided
Sep 13 01:33:00.442821 ignition[898]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 01:33:00.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:00.449361 systemd[1]: Finished ignition-fetch-offline.service.
Sep 13 01:33:00.489063 kernel: audit: type=1130 audit(1757727180.457:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:00.442829 ignition[898]: no config at "/usr/lib/ignition/user.ign"
Sep 13 01:33:00.458533 systemd[1]: Starting ignition-fetch.service...
Sep 13 01:33:00.442835 ignition[898]: failed to fetch config: resource requires networking
Sep 13 01:33:00.443063 ignition[898]: Ignition finished successfully
Sep 13 01:33:00.476429 ignition[904]: Ignition 2.14.0
Sep 13 01:33:00.476435 ignition[904]: Stage: fetch
Sep 13 01:33:00.476567 ignition[904]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:33:00.476593 ignition[904]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:33:00.479504 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:33:00.479904 ignition[904]: parsed url from cmdline: ""
Sep 13 01:33:00.479908 ignition[904]: no config URL provided
Sep 13 01:33:00.479915 ignition[904]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 01:33:00.479933 ignition[904]: no config at "/usr/lib/ignition/user.ign"
Sep 13 01:33:00.479962 ignition[904]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 13 01:33:00.578226 ignition[904]: GET result: OK
Sep 13 01:33:00.578332 ignition[904]: config has been read from IMDS userdata
Sep 13 01:33:00.578398 ignition[904]: parsing config with SHA512: 9a03bdc5b6d4b2d48ef17988449fcd016de53e37719c70db02453dc3926fb89f303940a29ef0bcdc59944f01763214799f0bc59d7d434af67c810be21eaaf1c0
Sep 13 01:33:00.582492 unknown[904]: fetched base config from
"system"
Sep 13 01:33:00.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:00.583063 ignition[904]: fetch: fetch complete
Sep 13 01:33:00.615340 kernel: audit: type=1130 audit(1757727180.590:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:00.582500 unknown[904]: fetched base config from "system"
Sep 13 01:33:00.583068 ignition[904]: fetch: fetch passed
Sep 13 01:33:00.582510 unknown[904]: fetched user config from "azure"
Sep 13 01:33:00.583108 ignition[904]: Ignition finished successfully
Sep 13 01:33:00.586887 systemd[1]: Finished ignition-fetch.service.
Sep 13 01:33:00.630807 ignition[910]: Ignition 2.14.0
Sep 13 01:33:00.610008 systemd[1]: Starting ignition-kargs.service...
Sep 13 01:33:00.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:00.630813 ignition[910]: Stage: kargs
Sep 13 01:33:00.672138 kernel: audit: type=1130 audit(1757727180.645:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:00.638392 systemd[1]: Finished ignition-kargs.service.
Sep 13 01:33:00.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:00.630921 ignition[910]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:33:00.703920 kernel: audit: type=1130 audit(1757727180.676:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:00.646398 systemd[1]: Starting ignition-disks.service...
Sep 13 01:33:00.630944 ignition[910]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:33:00.670748 systemd[1]: Finished ignition-disks.service.
Sep 13 01:33:00.635610 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:33:00.676541 systemd[1]: Reached target initrd-root-device.target.
Sep 13 01:33:00.637354 ignition[910]: kargs: kargs passed
Sep 13 01:33:00.699583 systemd[1]: Reached target local-fs-pre.target.
Sep 13 01:33:00.637414 ignition[910]: Ignition finished successfully
Sep 13 01:33:00.708548 systemd[1]: Reached target local-fs.target.
Sep 13 01:33:00.655981 ignition[916]: Ignition 2.14.0
Sep 13 01:33:00.716677 systemd[1]: Reached target sysinit.target.
Sep 13 01:33:00.655988 ignition[916]: Stage: disks
Sep 13 01:33:00.724579 systemd[1]: Reached target basic.target.
Sep 13 01:33:00.656097 ignition[916]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:33:00.738445 systemd[1]: Starting systemd-fsck-root.service...
Sep 13 01:33:00.656117 ignition[916]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:33:00.658866 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:33:00.665315 ignition[916]: disks: disks passed
Sep 13 01:33:00.665375 ignition[916]: Ignition finished successfully
Sep 13 01:33:00.854695 systemd-fsck[924]: ROOT: clean, 629/7326000 files, 481083/7359488 blocks
Sep 13 01:33:00.888855 systemd[1]: Finished systemd-fsck-root.service.
Sep 13 01:33:00.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:00.916307 kernel: audit: type=1130 audit(1757727180.893:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:00.916165 systemd[1]: Mounting sysroot.mount...
Sep 13 01:33:00.941495 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 13 01:33:00.941849 systemd[1]: Mounted sysroot.mount.
Sep 13 01:33:00.945750 systemd[1]: Reached target initrd-root-fs.target.
Sep 13 01:33:00.985185 systemd[1]: Mounting sysroot-usr.mount...
Sep 13 01:33:00.989868 systemd[1]: Starting flatcar-metadata-hostname.service...
Sep 13 01:33:00.998574 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 01:33:00.998608 systemd[1]: Reached target ignition-diskful.target.
Sep 13 01:33:01.004303 systemd[1]: Mounted sysroot-usr.mount.
Sep 13 01:33:01.091217 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 01:33:01.096084 systemd[1]: Starting initrd-setup-root.service...
Sep 13 01:33:01.119483 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (935)
Sep 13 01:33:01.132500 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 01:33:01.132535 kernel: BTRFS info (device sda6): using free space tree
Sep 13 01:33:01.132546 initrd-setup-root[940]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 01:33:01.143401 kernel: BTRFS info (device sda6): has skinny extents
Sep 13 01:33:01.150516 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 01:33:01.171181 initrd-setup-root[966]: cut: /sysroot/etc/group: No such file or directory
Sep 13 01:33:01.195673 initrd-setup-root[974]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 01:33:01.219117 initrd-setup-root[982]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 01:33:02.181263 systemd[1]: Finished initrd-setup-root.service.
Sep 13 01:33:02.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:02.204672 systemd[1]: Starting ignition-mount.service...
Sep 13 01:33:02.214294 kernel: audit: type=1130 audit(1757727182.185:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:02.213575 systemd[1]: Starting sysroot-boot.service...
Sep 13 01:33:02.222746 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Sep 13 01:33:02.222865 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Sep 13 01:33:02.243093 systemd[1]: Finished sysroot-boot.service.
Sep 13 01:33:02.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:02.267553 kernel: audit: type=1130 audit(1757727182.246:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:02.281787 ignition[1005]: INFO : Ignition 2.14.0
Sep 13 01:33:02.281787 ignition[1005]: INFO : Stage: mount
Sep 13 01:33:02.291183 ignition[1005]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:33:02.291183 ignition[1005]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:33:02.291183 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:33:02.291183 ignition[1005]: INFO : mount: mount passed
Sep 13 01:33:02.291183 ignition[1005]: INFO : Ignition finished successfully
Sep 13 01:33:02.350064 kernel: audit: type=1130 audit(1757727182.304:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:02.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:02.294273 systemd[1]: Finished ignition-mount.service.
Sep 13 01:33:02.735407 coreos-metadata[934]: Sep 13 01:33:02.735 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 13 01:33:02.930599 coreos-metadata[934]: Sep 13 01:33:02.930 INFO Fetch successful
Sep 13 01:33:02.964723 coreos-metadata[934]: Sep 13 01:33:02.964 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 13 01:33:03.028557 coreos-metadata[934]: Sep 13 01:33:03.028 INFO Fetch successful
Sep 13 01:33:03.047529 coreos-metadata[934]: Sep 13 01:33:03.047 INFO wrote hostname ci-3510.3.8-n-9d226ffbbf to /sysroot/etc/hostname
Sep 13 01:33:03.056373 systemd[1]: Finished flatcar-metadata-hostname.service.
Sep 13 01:33:03.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:03.062268 systemd[1]: Starting ignition-files.service...
Sep 13 01:33:03.087822 kernel: audit: type=1130 audit(1757727183.061:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:03.087073 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 01:33:03.108482 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1013)
Sep 13 01:33:03.122333 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 01:33:03.122375 kernel: BTRFS info (device sda6): using free space tree
Sep 13 01:33:03.122386 kernel: BTRFS info (device sda6): has skinny extents
Sep 13 01:33:03.134757 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 01:33:03.147723 ignition[1032]: INFO : Ignition 2.14.0
Sep 13 01:33:03.147723 ignition[1032]: INFO : Stage: files
Sep 13 01:33:03.158770 ignition[1032]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:33:03.158770 ignition[1032]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:33:03.158770 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:33:03.158770 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 01:33:03.190823 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 01:33:03.190823 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 01:33:03.271837 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 01:33:03.279586 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 01:33:03.287627 unknown[1032]: wrote ssh authorized keys file for user: core
Sep 13 01:33:03.293350 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 01:33:03.303687 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 01:33:03.303687 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 01:33:03.303687 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 01:33:03.303687 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 13 01:33:03.398014 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 01:33:03.793684 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 01:33:03.804789 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 01:33:03.804789 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 01:33:03.804789 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 01:33:03.804789 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 01:33:03.804789 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 01:33:03.804789 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 01:33:03.804789 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 01:33:03.804789 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 01:33:03.882460 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 01:33:03.882460 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 01:33:03.882460 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 01:33:03.882460 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 01:33:03.882460 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Sep 13 01:33:03.882460 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 01:33:03.882460 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4213148250"
Sep 13 01:33:03.882460 ignition[1032]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4213148250": device or resource busy
Sep 13 01:33:03.882460 ignition[1032]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4213148250", trying btrfs: device or resource busy
Sep 13 01:33:03.882460 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4213148250"
Sep 13 01:33:03.882460 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4213148250"
Sep 13 01:33:03.882460 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem4213148250"
Sep 13 01:33:03.882460 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem4213148250"
Sep 13 01:33:03.882460 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Sep 13 01:33:03.872479 systemd[1]: mnt-oem4213148250.mount:
Deactivated successfully.
Sep 13 01:33:04.040091 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 01:33:04.040091 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 01:33:04.040091 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1503641091"
Sep 13 01:33:04.040091 ignition[1032]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1503641091": device or resource busy
Sep 13 01:33:04.040091 ignition[1032]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1503641091", trying btrfs: device or resource busy
Sep 13 01:33:04.040091 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1503641091"
Sep 13 01:33:04.040091 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1503641091"
Sep 13 01:33:04.040091 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem1503641091"
Sep 13 01:33:04.040091 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem1503641091"
Sep 13 01:33:04.040091 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 01:33:04.040091 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 01:33:04.040091 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 13 01:33:04.402551 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Sep 13 01:33:04.645723 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 01:33:04.645723 ignition[1032]: INFO : files: op(14): [started] processing unit "waagent.service"
Sep 13 01:33:04.645723 ignition[1032]: INFO : files: op(14): [finished] processing unit "waagent.service"
Sep 13 01:33:04.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(15): [started] processing unit "nvidia.service"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(15): [finished] processing unit "nvidia.service"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(16): [started] processing unit "containerd.service"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(16): op(17): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(16): op(17): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(16): [finished] processing unit "containerd.service"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(18): [started] processing unit "prepare-helm.service"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(18): [finished] processing unit "prepare-helm.service"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(1a): [started] setting preset to enabled for "waagent.service"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(1a): [finished] setting preset to enabled for "waagent.service"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 01:33:04.690970 ignition[1032]: INFO : files: files passed
Sep 13 01:33:04.690970 ignition[1032]: INFO : Ignition finished successfully
Sep 13 01:33:04.999176 kernel: audit: type=1130 audit(1757727184.669:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:04.999203 kernel: audit: type=1130 audit(1757727184.767:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 13 01:33:04.999213 kernel: audit: type=1131 audit(1757727184.767:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:04.999223 kernel: audit: type=1130 audit(1757727184.809:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:04.999239 kernel: audit: type=1130 audit(1757727184.892:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:04.999252 kernel: audit: type=1131 audit(1757727184.915:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:04.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:04.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:04.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:04.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:04.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:04.659049 systemd[1]: Finished ignition-files.service. Sep 13 01:33:04.695755 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 01:33:04.706344 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 01:33:05.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.030232 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 01:33:04.707205 systemd[1]: Starting ignition-quench.service... Sep 13 01:33:04.728630 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 01:33:04.729109 systemd[1]: Finished ignition-quench.service. Sep 13 01:33:04.767949 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 01:33:05.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:04.809703 systemd[1]: Reached target ignition-complete.target. Sep 13 01:33:04.839910 systemd[1]: Starting initrd-parse-etc.service... Sep 13 01:33:04.881721 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 01:33:04.881844 systemd[1]: Finished initrd-parse-etc.service. Sep 13 01:33:04.916008 systemd[1]: Reached target initrd-fs.target. Sep 13 01:33:04.941445 systemd[1]: Reached target initrd.target. Sep 13 01:33:04.954744 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Sep 13 01:33:04.955626 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 01:33:05.004561 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 01:33:05.015502 systemd[1]: Starting initrd-cleanup.service... Sep 13 01:33:05.043845 systemd[1]: Stopped target nss-lookup.target. Sep 13 01:33:05.048611 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 01:33:05.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.058240 systemd[1]: Stopped target timers.target. Sep 13 01:33:05.067591 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 01:33:05.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.067698 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 01:33:05.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.076191 systemd[1]: Stopped target initrd.target. Sep 13 01:33:05.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.085596 systemd[1]: Stopped target basic.target. Sep 13 01:33:05.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.093953 systemd[1]: Stopped target ignition-complete.target. Sep 13 01:33:05.103246 systemd[1]: Stopped target ignition-diskful.target. 
Sep 13 01:33:05.242426 iscsid[878]: iscsid shutting down. Sep 13 01:33:05.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.113205 systemd[1]: Stopped target initrd-root-device.target. Sep 13 01:33:05.260693 ignition[1070]: INFO : Ignition 2.14.0 Sep 13 01:33:05.260693 ignition[1070]: INFO : Stage: umount Sep 13 01:33:05.260693 ignition[1070]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:33:05.260693 ignition[1070]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:33:05.260693 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:33:05.260693 ignition[1070]: INFO : umount: umount passed Sep 13 01:33:05.260693 ignition[1070]: INFO : Ignition finished successfully Sep 13 01:33:05.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:05.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.122024 systemd[1]: Stopped target remote-fs.target. Sep 13 01:33:05.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.130178 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 01:33:05.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.138198 systemd[1]: Stopped target sysinit.target. Sep 13 01:33:05.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.148740 systemd[1]: Stopped target local-fs.target. Sep 13 01:33:05.156660 systemd[1]: Stopped target local-fs-pre.target. Sep 13 01:33:05.164810 systemd[1]: Stopped target swap.target. Sep 13 01:33:05.172350 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 01:33:05.172470 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 01:33:05.180855 systemd[1]: Stopped target cryptsetup.target. 
Sep 13 01:33:05.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.189175 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 01:33:05.189266 systemd[1]: Stopped dracut-initqueue.service. Sep 13 01:33:05.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.197134 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 01:33:05.197230 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 01:33:05.207488 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 01:33:05.207591 systemd[1]: Stopped ignition-files.service. Sep 13 01:33:05.215275 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 01:33:05.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.215366 systemd[1]: Stopped flatcar-metadata-hostname.service. Sep 13 01:33:05.224756 systemd[1]: Stopping ignition-mount.service... Sep 13 01:33:05.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.241636 systemd[1]: Stopping iscsid.service... Sep 13 01:33:05.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.245667 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Sep 13 01:33:05.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.245803 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 01:33:05.251878 systemd[1]: Stopping sysroot-boot.service... Sep 13 01:33:05.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.264542 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 01:33:05.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.264692 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 01:33:05.547000 audit: BPF prog-id=6 op=UNLOAD Sep 13 01:33:05.269325 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 01:33:05.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.269410 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 01:33:05.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.281292 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 01:33:05.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.281430 systemd[1]: Stopped iscsid.service. 
Sep 13 01:33:05.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.286354 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 01:33:05.616222 kernel: hv_netvsc 0022487a-6966-0022-487a-69660022487a eth0: Data path switched from VF: enP41112s1 Sep 13 01:33:05.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.286522 systemd[1]: Stopped ignition-mount.service. Sep 13 01:33:05.312102 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 01:33:05.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.313613 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 01:33:05.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.313705 systemd[1]: Finished initrd-cleanup.service. Sep 13 01:33:05.323758 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 01:33:05.323822 systemd[1]: Stopped ignition-disks.service. Sep 13 01:33:05.332451 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 01:33:05.332601 systemd[1]: Stopped ignition-kargs.service. Sep 13 01:33:05.342011 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Sep 13 01:33:05.342062 systemd[1]: Stopped ignition-fetch.service. Sep 13 01:33:05.351133 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 01:33:05.351174 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 01:33:05.359835 systemd[1]: Stopped target paths.target. Sep 13 01:33:05.368183 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 01:33:05.371488 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 01:33:05.376426 systemd[1]: Stopped target slices.target. Sep 13 01:33:05.384405 systemd[1]: Stopped target sockets.target. Sep 13 01:33:05.391796 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 01:33:05.391846 systemd[1]: Closed iscsid.socket. Sep 13 01:33:05.400092 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 01:33:05.400134 systemd[1]: Stopped ignition-setup.service. Sep 13 01:33:05.408351 systemd[1]: Stopping iscsiuio.service... Sep 13 01:33:05.416643 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 01:33:05.416740 systemd[1]: Stopped iscsiuio.service. Sep 13 01:33:05.426974 systemd[1]: Stopped target network.target. Sep 13 01:33:05.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.434612 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 01:33:05.434643 systemd[1]: Closed iscsiuio.socket. Sep 13 01:33:05.444267 systemd[1]: Stopping systemd-networkd.service... Sep 13 01:33:05.452899 systemd[1]: Stopping systemd-resolved.service... Sep 13 01:33:05.462094 systemd-networkd[871]: eth0: DHCPv6 lease lost Sep 13 01:33:05.762000 audit: BPF prog-id=9 op=UNLOAD Sep 13 01:33:05.464648 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 01:33:05.464753 systemd[1]: Stopped systemd-networkd.service. 
Sep 13 01:33:05.471364 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 01:33:05.471397 systemd[1]: Closed systemd-networkd.socket. Sep 13 01:33:05.481837 systemd[1]: Stopping network-cleanup.service... Sep 13 01:33:05.489722 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 01:33:05.489787 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 01:33:05.494976 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 01:33:05.495019 systemd[1]: Stopped systemd-sysctl.service. Sep 13 01:33:05.506064 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 01:33:05.506102 systemd[1]: Stopped systemd-modules-load.service. Sep 13 01:33:05.511715 systemd[1]: Stopping systemd-udevd.service... Sep 13 01:33:05.521356 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 01:33:05.521878 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 01:33:05.521993 systemd[1]: Stopped systemd-resolved.service. Sep 13 01:33:05.531131 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 01:33:05.531254 systemd[1]: Stopped systemd-udevd.service. Sep 13 01:33:05.539531 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 01:33:05.539577 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 01:33:05.548693 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 01:33:05.548725 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 01:33:05.553210 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 01:33:05.553257 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 01:33:05.562003 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 01:33:05.562043 systemd[1]: Stopped dracut-cmdline.service. Sep 13 01:33:05.571235 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 01:33:05.571275 systemd[1]: Stopped dracut-cmdline-ask.service. 
Sep 13 01:33:05.580301 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 01:33:05.588742 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 01:33:05.588798 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 01:33:05.596750 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 01:33:05.596873 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 01:33:05.616611 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 01:33:05.616713 systemd[1]: Stopped sysroot-boot.service. Sep 13 01:33:05.624536 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 01:33:05.624582 systemd[1]: Stopped initrd-setup-root.service. Sep 13 01:33:05.725003 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 01:33:05.725105 systemd[1]: Stopped network-cleanup.service. Sep 13 01:33:05.732486 systemd[1]: Reached target initrd-switch-root.target. Sep 13 01:33:05.741370 systemd[1]: Starting initrd-switch-root.service... Sep 13 01:33:06.002433 systemd[1]: Switching root. Sep 13 01:33:06.002000 audit: BPF prog-id=5 op=UNLOAD Sep 13 01:33:06.002000 audit: BPF prog-id=4 op=UNLOAD Sep 13 01:33:06.002000 audit: BPF prog-id=3 op=UNLOAD Sep 13 01:33:06.004000 audit: BPF prog-id=8 op=UNLOAD Sep 13 01:33:06.004000 audit: BPF prog-id=7 op=UNLOAD Sep 13 01:33:06.024730 systemd-journald[276]: Journal stopped Sep 13 01:33:23.635861 systemd-journald[276]: Received SIGTERM from PID 1 (systemd). Sep 13 01:33:23.635882 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 01:33:23.635893 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 13 01:33:23.635904 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 01:33:23.635911 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 01:33:23.635920 kernel: SELinux: policy capability open_perms=1 Sep 13 01:33:23.635928 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 01:33:23.635936 kernel: SELinux: policy capability always_check_network=0 Sep 13 01:33:23.635946 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 01:33:23.635954 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 01:33:23.635963 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 01:33:23.635971 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 01:33:23.635979 kernel: kauditd_printk_skb: 43 callbacks suppressed Sep 13 01:33:23.635988 kernel: audit: type=1403 audit(1757727189.849:87): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 01:33:23.635998 systemd[1]: Successfully loaded SELinux policy in 326.163ms. Sep 13 01:33:23.636009 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.233ms. Sep 13 01:33:23.636020 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 01:33:23.636030 systemd[1]: Detected virtualization microsoft. Sep 13 01:33:23.636039 systemd[1]: Detected architecture arm64. Sep 13 01:33:23.636048 systemd[1]: Detected first boot. Sep 13 01:33:23.636057 systemd[1]: Hostname set to . Sep 13 01:33:23.636066 systemd[1]: Initializing machine ID from random generator. Sep 13 01:33:23.636076 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Sep 13 01:33:23.636086 kernel: audit: type=1400 audit(1757727192.269:88): avc: denied { associate } for pid=1120 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 01:33:23.636096 kernel: audit: type=1300 audit(1757727192.269:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=40000225f4 a1=40000287f8 a2=40000266c0 a3=32 items=0 ppid=1103 pid=1120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:33:23.636106 kernel: audit: type=1327 audit(1757727192.269:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 01:33:23.636115 kernel: audit: type=1400 audit(1757727192.284:89): avc: denied { associate } for pid=1120 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 13 01:33:23.636125 kernel: audit: type=1300 audit(1757727192.284:89): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000226c9 a2=1ed a3=0 items=2 ppid=1103 pid=1120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:33:23.636134 kernel: audit: type=1307 audit(1757727192.284:89): cwd="/" Sep 13 01:33:23.636143 kernel: audit: type=1302 audit(1757727192.284:89): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:23.636152 kernel: audit: type=1302 audit(1757727192.284:89): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:23.636161 kernel: audit: type=1327 audit(1757727192.284:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 01:33:23.636170 systemd[1]: Populated /etc with preset unit settings. Sep 13 01:33:23.636179 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:33:23.636190 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:33:23.636201 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:33:23.636210 systemd[1]: Queued start job for default target multi-user.target. Sep 13 01:33:23.636219 systemd[1]: Unnecessary job was removed for dev-sda6.device. Sep 13 01:33:23.636229 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 01:33:23.636238 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 01:33:23.636250 systemd[1]: Created slice system-getty.slice. Sep 13 01:33:23.636260 systemd[1]: Created slice system-modprobe.slice. Sep 13 01:33:23.636270 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 01:33:23.636279 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
Sep 13 01:33:23.636288 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 01:33:23.636298 systemd[1]: Created slice user.slice. Sep 13 01:33:23.636307 systemd[1]: Started systemd-ask-password-console.path. Sep 13 01:33:23.636316 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 01:33:23.636326 systemd[1]: Set up automount boot.automount. Sep 13 01:33:23.636335 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 01:33:23.636345 systemd[1]: Reached target integritysetup.target. Sep 13 01:33:23.636354 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 01:33:23.636364 systemd[1]: Reached target remote-fs.target. Sep 13 01:33:23.636373 systemd[1]: Reached target slices.target. Sep 13 01:33:23.636383 systemd[1]: Reached target swap.target. Sep 13 01:33:23.636392 systemd[1]: Reached target torcx.target. Sep 13 01:33:23.636401 systemd[1]: Reached target veritysetup.target. Sep 13 01:33:23.636412 systemd[1]: Listening on systemd-coredump.socket. Sep 13 01:33:23.636422 systemd[1]: Listening on systemd-initctl.socket. Sep 13 01:33:23.636431 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 01:33:23.636441 kernel: audit: type=1400 audit(1757727203.176:90): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 01:33:23.636450 kernel: audit: type=1335 audit(1757727203.176:91): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 13 01:33:23.636459 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 01:33:23.636481 systemd[1]: Listening on systemd-journald.socket. Sep 13 01:33:23.636490 systemd[1]: Listening on systemd-networkd.socket. Sep 13 01:33:23.636500 systemd[1]: Listening on systemd-udevd-control.socket. 
Sep 13 01:33:23.636511 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 01:33:23.636520 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 01:33:23.636530 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 01:33:23.636539 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 01:33:23.636550 systemd[1]: Mounting media.mount...
Sep 13 01:33:23.636560 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 01:33:23.636569 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 01:33:23.636579 systemd[1]: Mounting tmp.mount...
Sep 13 01:33:23.636588 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 01:33:23.636598 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 01:33:23.636607 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 01:33:23.636617 systemd[1]: Starting modprobe@configfs.service...
Sep 13 01:33:23.636626 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:33:23.636637 systemd[1]: Starting modprobe@drm.service...
Sep 13 01:33:23.636647 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:33:23.636656 systemd[1]: Starting modprobe@fuse.service...
Sep 13 01:33:23.636665 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:33:23.636675 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 01:33:23.636685 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 13 01:33:23.636694 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Sep 13 01:33:23.636704 systemd[1]: Starting systemd-journald.service...
Sep 13 01:33:23.636713 systemd[1]: Starting systemd-modules-load.service...
Sep 13 01:33:23.636724 systemd[1]: Starting systemd-network-generator.service...
Sep 13 01:33:23.636733 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 01:33:23.636743 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 01:33:23.636752 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 01:33:23.636762 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 01:33:23.636771 kernel: loop: module loaded
Sep 13 01:33:23.636780 systemd[1]: Mounted media.mount.
Sep 13 01:33:23.636789 kernel: fuse: init (API version 7.34)
Sep 13 01:33:23.636798 kernel: audit: type=1305 audit(1757727203.621:92): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 01:33:23.636808 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 01:33:23.636821 systemd-journald[1214]: Journal started
Sep 13 01:33:23.636861 systemd-journald[1214]: Runtime Journal (/run/log/journal/29e3b49073cd4e54b22a47f5bd71c6a5) is 8.0M, max 78.5M, 70.5M free.
Sep 13 01:33:23.176000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 01:33:23.176000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 13 01:33:23.621000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 01:33:23.639158 kernel: audit: type=1300 audit(1757727203.621:92): arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd9668bc0 a2=4000 a3=1 items=0 ppid=1 pid=1214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:33:23.621000 audit[1214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd9668bc0 a2=4000 a3=1 items=0 ppid=1 pid=1214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:33:23.669317 kernel: audit: type=1327 audit(1757727203.621:92): proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 01:33:23.621000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 01:33:23.681046 systemd[1]: Started systemd-journald.service.
Sep 13 01:33:23.686562 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 01:33:23.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.707245 systemd[1]: Mounted tmp.mount.
Sep 13 01:33:23.711142 kernel: audit: type=1130 audit(1757727203.685:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.711817 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 01:33:23.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.716755 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 01:33:23.738704 kernel: audit: type=1130 audit(1757727203.715:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.739497 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 01:33:23.739746 systemd[1]: Finished modprobe@configfs.service.
Sep 13 01:33:23.760750 kernel: audit: type=1130 audit(1757727203.738:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.761543 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:33:23.761780 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:33:23.783101 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 01:33:23.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.795669 systemd[1]: Finished modprobe@drm.service.
Sep 13 01:33:23.800966 kernel: audit: type=1130 audit(1757727203.760:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.805877 kernel: audit: type=1131 audit(1757727203.760:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.805759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:33:23.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.806216 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:33:23.812207 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 01:33:23.812431 systemd[1]: Finished modprobe@fuse.service.
Sep 13 01:33:23.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.817127 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:33:23.817423 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:33:23.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.822801 systemd[1]: Finished systemd-modules-load.service.
Sep 13 01:33:23.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.831592 systemd[1]: Finished systemd-network-generator.service.
Sep 13 01:33:23.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.837413 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 01:33:23.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.842624 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 01:33:23.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.847997 systemd[1]: Reached target network-pre.target.
Sep 13 01:33:23.853853 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 01:33:23.859727 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 01:33:23.863866 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 01:33:23.894831 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 01:33:23.900413 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 01:33:23.905114 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:33:23.906394 systemd[1]: Starting systemd-random-seed.service...
Sep 13 01:33:23.910657 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:33:23.911875 systemd[1]: Starting systemd-sysctl.service...
Sep 13 01:33:23.916931 systemd[1]: Starting systemd-sysusers.service...
Sep 13 01:33:23.921989 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 01:33:23.928071 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 01:33:23.933204 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 01:33:23.939083 udevadm[1272]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 01:33:23.987318 systemd-journald[1214]: Time spent on flushing to /var/log/journal/29e3b49073cd4e54b22a47f5bd71c6a5 is 13.128ms for 1035 entries.
Sep 13 01:33:23.987318 systemd-journald[1214]: System Journal (/var/log/journal/29e3b49073cd4e54b22a47f5bd71c6a5) is 8.0M, max 2.6G, 2.6G free.
Sep 13 01:33:24.081575 systemd-journald[1214]: Received client request to flush runtime journal.
Sep 13 01:33:24.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:23.995834 systemd[1]: Finished systemd-random-seed.service.
Sep 13 01:33:24.001049 systemd[1]: Reached target first-boot-complete.target.
Sep 13 01:33:24.082708 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 01:33:24.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:24.136600 systemd[1]: Finished systemd-sysctl.service.
Sep 13 01:33:24.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:24.846896 systemd[1]: Finished systemd-sysusers.service.
Sep 13 01:33:24.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:24.852856 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 01:33:25.710553 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 01:33:25.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.830894 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 01:33:25.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.837861 systemd[1]: Starting systemd-udevd.service...
Sep 13 01:33:25.857275 systemd-udevd[1283]: Using default interface naming scheme 'v252'.
Sep 13 01:33:26.997717 systemd[1]: Started systemd-udevd.service.
Sep 13 01:33:27.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:27.009692 systemd[1]: Starting systemd-networkd.service...
Sep 13 01:33:27.033062 systemd[1]: Found device dev-ttyAMA0.device.
Sep 13 01:33:27.096560 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 01:33:27.105000 audit[1293]: AVC avc: denied { confidentiality } for pid=1293 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 01:33:27.116493 kernel: hv_vmbus: registering driver hv_balloon
Sep 13 01:33:27.116591 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 13 01:33:27.125368 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 13 01:33:27.105000 audit[1293]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab19cae070 a1=aa2c a2=ffff853724b0 a3=aaab19c0b010 items=12 ppid=1283 pid=1293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:33:27.105000 audit: CWD cwd="/"
Sep 13 01:33:27.105000 audit: PATH item=0 name=(null) inode=6853 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:27.105000 audit: PATH item=1 name=(null) inode=10833 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:27.105000 audit: PATH item=2 name=(null) inode=10833 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:27.105000 audit: PATH item=3 name=(null) inode=10834 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:27.105000 audit: PATH item=4 name=(null) inode=10833 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:27.105000 audit: PATH item=5 name=(null) inode=10835 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:27.105000 audit: PATH item=6 name=(null) inode=10833 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:27.105000 audit: PATH item=7 name=(null) inode=10836 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:27.105000 audit: PATH item=8 name=(null) inode=10833 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:27.105000 audit: PATH item=9 name=(null) inode=10837 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:27.105000 audit: PATH item=10 name=(null) inode=10833 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:27.105000 audit: PATH item=11 name=(null) inode=10838 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:27.105000 audit: PROCTITLE proctitle="(udev-worker)"
Sep 13 01:33:27.132745 systemd[1]: Starting systemd-userdbd.service...
Sep 13 01:33:27.142517 kernel: hv_vmbus: registering driver hyperv_fb
Sep 13 01:33:27.155098 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 13 01:33:27.155177 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 13 01:33:27.163556 kernel: Console: switching to colour dummy device 80x25
Sep 13 01:33:27.167506 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 01:33:27.185801 kernel: hv_utils: Registering HyperV Utility Driver
Sep 13 01:33:27.185923 kernel: hv_vmbus: registering driver hv_utils
Sep 13 01:33:27.194492 kernel: hv_utils: Heartbeat IC version 3.0
Sep 13 01:33:27.194589 kernel: hv_utils: Shutdown IC version 3.2
Sep 13 01:33:27.194619 kernel: hv_utils: TimeSync IC version 4.0
Sep 13 01:33:27.336308 systemd[1]: Started systemd-userdbd.service.
Sep 13 01:33:27.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:27.570491 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 01:33:27.576168 systemd[1]: Finished systemd-udev-settle.service.
Sep 13 01:33:27.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:27.582122 systemd[1]: Starting lvm2-activation-early.service...
Sep 13 01:33:27.935484 lvm[1360]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 01:33:28.017137 systemd[1]: Finished lvm2-activation-early.service.
Sep 13 01:33:28.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.022519 systemd[1]: Reached target cryptsetup.target.
Sep 13 01:33:28.028224 systemd[1]: Starting lvm2-activation.service...
Sep 13 01:33:28.032384 lvm[1362]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 01:33:28.052284 systemd[1]: Finished lvm2-activation.service.
Sep 13 01:33:28.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.058129 systemd[1]: Reached target local-fs-pre.target.
Sep 13 01:33:28.065035 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 01:33:28.065062 systemd[1]: Reached target local-fs.target.
Sep 13 01:33:28.069576 systemd[1]: Reached target machines.target.
Sep 13 01:33:28.075925 systemd[1]: Starting ldconfig.service...
Sep 13 01:33:28.098370 systemd-networkd[1304]: lo: Link UP
Sep 13 01:33:28.098381 systemd-networkd[1304]: lo: Gained carrier
Sep 13 01:33:28.098796 systemd-networkd[1304]: Enumeration completed
Sep 13 01:33:28.110124 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:33:28.110211 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:33:28.111566 systemd[1]: Starting systemd-boot-update.service...
Sep 13 01:33:28.117257 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 13 01:33:28.124719 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 13 01:33:28.130505 systemd[1]: Starting systemd-sysext.service...
Sep 13 01:33:28.134656 systemd[1]: Started systemd-networkd.service.
Sep 13 01:33:28.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.140487 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 01:33:28.145655 systemd-networkd[1304]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 01:33:28.178434 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1365 (bootctl)
Sep 13 01:33:28.179809 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 13 01:33:28.199760 kernel: mlx5_core a098:00:02.0 enP41112s1: Link up
Sep 13 01:33:28.200136 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 13 01:33:28.227212 kernel: hv_netvsc 0022487a-6966-0022-487a-69660022487a eth0: Data path switched to VF: enP41112s1
Sep 13 01:33:28.230513 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 13 01:33:28.231563 systemd-networkd[1304]: enP41112s1: Link UP
Sep 13 01:33:28.231926 systemd-networkd[1304]: eth0: Link UP
Sep 13 01:33:28.232862 systemd-networkd[1304]: eth0: Gained carrier
Sep 13 01:33:28.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.240496 systemd-networkd[1304]: enP41112s1: Gained carrier
Sep 13 01:33:28.240899 systemd[1]: Unmounting usr-share-oem.mount...
Sep 13 01:33:28.248351 systemd-networkd[1304]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 13 01:33:28.249457 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 13 01:33:28.249727 systemd[1]: Unmounted usr-share-oem.mount.
Sep 13 01:33:28.350213 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 01:33:28.350921 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 13 01:33:28.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.359882 kernel: kauditd_printk_skb: 43 callbacks suppressed
Sep 13 01:33:28.359956 kernel: audit: type=1130 audit(1757727208.354:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.386208 kernel: loop0: detected capacity change from 0 to 203944
Sep 13 01:33:28.449209 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 01:33:28.473260 kernel: loop1: detected capacity change from 0 to 203944
Sep 13 01:33:28.492484 (sd-sysext)[1382]: Using extensions 'kubernetes'.
Sep 13 01:33:28.493776 (sd-sysext)[1382]: Merged extensions into '/usr'.
Sep 13 01:33:28.510470 systemd[1]: Mounting usr-share-oem.mount...
Sep 13 01:33:28.515114 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 01:33:28.516718 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:33:28.523732 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:33:28.531574 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:33:28.537108 systemd-fsck[1378]: fsck.fat 4.2 (2021-01-31)
Sep 13 01:33:28.537108 systemd-fsck[1378]: /dev/sda1: 236 files, 117310/258078 clusters
Sep 13 01:33:28.539939 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:33:28.540515 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:33:28.547810 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 13 01:33:28.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.555920 systemd[1]: Mounted usr-share-oem.mount.
Sep 13 01:33:28.577052 kernel: audit: type=1130 audit(1757727208.554:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.577915 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:33:28.578221 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:33:28.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.583365 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:33:28.583641 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:33:28.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.618744 kernel: audit: type=1130 audit(1757727208.581:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.618818 kernel: audit: type=1131 audit(1757727208.581:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.619712 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:33:28.620008 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:33:28.637082 kernel: audit: type=1130 audit(1757727208.618:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.658639 kernel: audit: type=1131 audit(1757727208.618:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.661608 systemd[1]: Finished systemd-sysext.service.
Sep 13 01:33:28.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.682684 kernel: audit: type=1130 audit(1757727208.656:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.684273 systemd[1]: Mounting boot.mount...
Sep 13 01:33:28.699669 kernel: audit: type=1131 audit(1757727208.656:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.719869 kernel: audit: type=1130 audit(1757727208.680:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.720300 systemd[1]: Starting ensure-sysext.service...
Sep 13 01:33:28.724588 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:33:28.724760 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:33:28.726115 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 13 01:33:28.734827 systemd[1]: Reloading.
Sep 13 01:33:28.754649 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 13 01:33:28.782316 /usr/lib/systemd/system-generators/torcx-generator[1422]: time="2025-09-13T01:33:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 01:33:28.782675 /usr/lib/systemd/system-generators/torcx-generator[1422]: time="2025-09-13T01:33:28Z" level=info msg="torcx already run" Sep 13 01:33:28.793521 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 01:33:28.808791 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 01:33:28.873501 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:33:28.873519 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:33:28.889012 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:33:28.969573 systemd[1]: Mounted boot.mount. Sep 13 01:33:28.982401 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 01:33:28.983844 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 01:33:28.990791 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:33:28.998541 systemd[1]: Starting modprobe@loop.service... Sep 13 01:33:29.002847 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 13 01:33:29.002988 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:33:29.004151 systemd[1]: Finished systemd-boot-update.service. Sep 13 01:33:29.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.009819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:33:29.010107 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 01:33:29.031138 kernel: audit: type=1130 audit(1757727209.008:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.032457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:33:29.032708 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 01:33:29.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:29.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.038158 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 01:33:29.038457 systemd[1]: Finished modprobe@loop.service. Sep 13 01:33:29.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.045450 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 01:33:29.047018 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 01:33:29.052866 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:33:29.059266 systemd[1]: Starting modprobe@loop.service... Sep 13 01:33:29.063995 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 01:33:29.064291 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:33:29.065254 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:33:29.065644 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 01:33:29.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:29.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.071415 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:33:29.071672 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 01:33:29.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.077474 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 01:33:29.077742 systemd[1]: Finished modprobe@loop.service. Sep 13 01:33:29.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.086423 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 01:33:29.087889 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 01:33:29.093892 systemd[1]: Starting modprobe@drm.service... Sep 13 01:33:29.099994 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:33:29.105845 systemd[1]: Starting modprobe@loop.service... 
Sep 13 01:33:29.110058 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 01:33:29.110326 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:33:29.111491 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:33:29.111762 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 01:33:29.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.117428 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 01:33:29.117676 systemd[1]: Finished modprobe@drm.service. Sep 13 01:33:29.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.122572 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:33:29.122823 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 01:33:29.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:29.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.128528 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 01:33:29.128795 systemd[1]: Finished modprobe@loop.service. Sep 13 01:33:29.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.135134 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 01:33:29.135318 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 01:33:29.136541 systemd[1]: Finished ensure-sysext.service. Sep 13 01:33:29.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.286291 systemd-networkd[1304]: eth0: Gained IPv6LL Sep 13 01:33:29.291279 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 01:33:29.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:32.204938 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Sep 13 01:33:32.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:32.212303 systemd[1]: Starting audit-rules.service... Sep 13 01:33:32.217720 systemd[1]: Starting clean-ca-certificates.service... Sep 13 01:33:32.224160 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 01:33:32.231287 systemd[1]: Starting systemd-resolved.service... Sep 13 01:33:32.237405 systemd[1]: Starting systemd-timesyncd.service... Sep 13 01:33:32.243476 systemd[1]: Starting systemd-update-utmp.service... Sep 13 01:33:32.248459 systemd[1]: Finished clean-ca-certificates.service. Sep 13 01:33:32.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:32.253708 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 01:33:32.308000 audit[1524]: SYSTEM_BOOT pid=1524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 01:33:32.311009 systemd[1]: Finished systemd-update-utmp.service. Sep 13 01:33:32.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:32.365463 systemd[1]: Started systemd-timesyncd.service. 
Sep 13 01:33:32.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:32.371109 systemd[1]: Reached target time-set.target. Sep 13 01:33:32.433444 systemd-resolved[1521]: Positive Trust Anchors: Sep 13 01:33:32.433772 systemd-resolved[1521]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 01:33:32.433843 systemd-resolved[1521]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 01:33:32.572688 systemd-resolved[1521]: Using system hostname 'ci-3510.3.8-n-9d226ffbbf'. Sep 13 01:33:32.574197 systemd[1]: Started systemd-resolved.service. Sep 13 01:33:32.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:32.578942 systemd[1]: Reached target network.target. Sep 13 01:33:32.583441 systemd[1]: Reached target network-online.target. Sep 13 01:33:32.588768 systemd[1]: Reached target nss-lookup.target. Sep 13 01:33:32.621383 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 01:33:32.625638 systemd-timesyncd[1522]: Contacted time server 23.186.168.128:123 (0.flatcar.pool.ntp.org). Sep 13 01:33:32.626005 systemd-timesyncd[1522]: Initial clock synchronization to Sat 2025-09-13 01:33:32.624202 UTC. 
Sep 13 01:33:32.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:32.691000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 01:33:32.691000 audit[1541]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffc6065f0 a2=420 a3=0 items=0 ppid=1517 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:33:32.691000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 01:33:32.693362 augenrules[1541]: No rules Sep 13 01:33:32.694280 systemd[1]: Finished audit-rules.service. Sep 13 01:33:40.328257 ldconfig[1364]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 01:33:40.345143 systemd[1]: Finished ldconfig.service. Sep 13 01:33:40.351116 systemd[1]: Starting systemd-update-done.service... Sep 13 01:33:40.404334 systemd[1]: Finished systemd-update-done.service. Sep 13 01:33:40.409712 systemd[1]: Reached target sysinit.target. Sep 13 01:33:40.415309 systemd[1]: Started motdgen.path. Sep 13 01:33:40.419693 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 01:33:40.425821 systemd[1]: Started logrotate.timer. Sep 13 01:33:40.429761 systemd[1]: Started mdadm.timer. Sep 13 01:33:40.433239 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 01:33:40.437772 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 01:33:40.437803 systemd[1]: Reached target paths.target. 
Sep 13 01:33:40.441946 systemd[1]: Reached target timers.target. Sep 13 01:33:40.446377 systemd[1]: Listening on dbus.socket. Sep 13 01:33:40.451353 systemd[1]: Starting docker.socket... Sep 13 01:33:40.487302 systemd[1]: Listening on sshd.socket. Sep 13 01:33:40.491333 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:33:40.491753 systemd[1]: Listening on docker.socket. Sep 13 01:33:40.495864 systemd[1]: Reached target sockets.target. Sep 13 01:33:40.499799 systemd[1]: Reached target basic.target. Sep 13 01:33:40.503944 systemd[1]: System is tainted: cgroupsv1 Sep 13 01:33:40.503993 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 01:33:40.504014 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 01:33:40.505205 systemd[1]: Starting containerd.service... Sep 13 01:33:40.509713 systemd[1]: Starting dbus.service... Sep 13 01:33:40.513783 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 01:33:40.519077 systemd[1]: Starting extend-filesystems.service... Sep 13 01:33:40.523115 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 01:33:40.540093 systemd[1]: Starting kubelet.service... Sep 13 01:33:40.544615 systemd[1]: Starting motdgen.service... Sep 13 01:33:40.549132 systemd[1]: Started nvidia.service. Sep 13 01:33:40.569767 systemd[1]: Starting prepare-helm.service... Sep 13 01:33:40.574601 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 01:33:40.579959 systemd[1]: Starting sshd-keygen.service... Sep 13 01:33:40.585595 systemd[1]: Starting systemd-logind.service... 
Sep 13 01:33:40.591306 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:33:40.591371 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 01:33:40.592537 systemd[1]: Starting update-engine.service... Sep 13 01:33:40.597878 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 01:33:40.607023 jq[1555]: false Sep 13 01:33:40.610418 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 01:33:40.610681 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 01:33:40.621161 jq[1572]: true Sep 13 01:33:40.623339 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 01:33:40.623581 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 01:33:40.641868 extend-filesystems[1556]: Found loop1 Sep 13 01:33:40.647158 extend-filesystems[1556]: Found sda Sep 13 01:33:40.647158 extend-filesystems[1556]: Found sda1 Sep 13 01:33:40.647158 extend-filesystems[1556]: Found sda2 Sep 13 01:33:40.647158 extend-filesystems[1556]: Found sda3 Sep 13 01:33:40.647158 extend-filesystems[1556]: Found usr Sep 13 01:33:40.647158 extend-filesystems[1556]: Found sda4 Sep 13 01:33:40.647158 extend-filesystems[1556]: Found sda6 Sep 13 01:33:40.647158 extend-filesystems[1556]: Found sda7 Sep 13 01:33:40.647158 extend-filesystems[1556]: Found sda9 Sep 13 01:33:40.647158 extend-filesystems[1556]: Checking size of /dev/sda9 Sep 13 01:33:40.710647 jq[1582]: true Sep 13 01:33:40.654530 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 01:33:40.654787 systemd[1]: Finished motdgen.service. 
Sep 13 01:33:40.725810 env[1589]: time="2025-09-13T01:33:40.725762587Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 01:33:40.760107 env[1589]: time="2025-09-13T01:33:40.760042820Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 01:33:40.760922 env[1589]: time="2025-09-13T01:33:40.760897935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:33:40.763894 env[1589]: time="2025-09-13T01:33:40.763860499Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:33:40.763955 env[1589]: time="2025-09-13T01:33:40.763893897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:33:40.764173 env[1589]: time="2025-09-13T01:33:40.764144884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:33:40.764173 env[1589]: time="2025-09-13T01:33:40.764170123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 01:33:40.764238 env[1589]: time="2025-09-13T01:33:40.764197761Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 01:33:40.764238 env[1589]: time="2025-09-13T01:33:40.764209001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Sep 13 01:33:40.764308 env[1589]: time="2025-09-13T01:33:40.764287677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:33:40.764553 env[1589]: time="2025-09-13T01:33:40.764530944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:33:40.764705 env[1589]: time="2025-09-13T01:33:40.764681096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:33:40.764705 env[1589]: time="2025-09-13T01:33:40.764704215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 01:33:40.764776 env[1589]: time="2025-09-13T01:33:40.764755732Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 01:33:40.764776 env[1589]: time="2025-09-13T01:33:40.764772891Z" level=info msg="metadata content store policy set" policy=shared Sep 13 01:33:40.774621 systemd-logind[1569]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Sep 13 01:33:40.776097 systemd-logind[1569]: New seat seat0. Sep 13 01:33:40.782286 env[1589]: time="2025-09-13T01:33:40.782244770Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 01:33:40.782390 env[1589]: time="2025-09-13T01:33:40.782296888Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 01:33:40.782390 env[1589]: time="2025-09-13T01:33:40.782314047Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Sep 13 01:33:40.782390 env[1589]: time="2025-09-13T01:33:40.782347885Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 01:33:40.782390 env[1589]: time="2025-09-13T01:33:40.782362444Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 01:33:40.782390 env[1589]: time="2025-09-13T01:33:40.782376363Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 01:33:40.782390 env[1589]: time="2025-09-13T01:33:40.782388643Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 01:33:40.782785 env[1589]: time="2025-09-13T01:33:40.782760583Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 01:33:40.782819 env[1589]: time="2025-09-13T01:33:40.782789342Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 01:33:40.782819 env[1589]: time="2025-09-13T01:33:40.782803101Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 01:33:40.782819 env[1589]: time="2025-09-13T01:33:40.782815860Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 01:33:40.782881 env[1589]: time="2025-09-13T01:33:40.782829939Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 01:33:40.782977 env[1589]: time="2025-09-13T01:33:40.782954933Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 01:33:40.783052 env[1589]: time="2025-09-13T01:33:40.783033569Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Sep 13 01:33:40.783392 env[1589]: time="2025-09-13T01:33:40.783365671Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 01:33:40.783493 env[1589]: time="2025-09-13T01:33:40.783396990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 01:33:40.783493 env[1589]: time="2025-09-13T01:33:40.783411069Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 01:33:40.783493 env[1589]: time="2025-09-13T01:33:40.783451267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 01:33:40.783493 env[1589]: time="2025-09-13T01:33:40.783464866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 01:33:40.783493 env[1589]: time="2025-09-13T01:33:40.783476545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 01:33:40.783493 env[1589]: time="2025-09-13T01:33:40.783488145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 01:33:40.783609 env[1589]: time="2025-09-13T01:33:40.783500744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 01:33:40.783609 env[1589]: time="2025-09-13T01:33:40.783512983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 01:33:40.783609 env[1589]: time="2025-09-13T01:33:40.783525343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 01:33:40.783609 env[1589]: time="2025-09-13T01:33:40.783537222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Sep 13 01:33:40.783609 env[1589]: time="2025-09-13T01:33:40.783550581Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 01:33:40.783817 env[1589]: time="2025-09-13T01:33:40.783664735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 01:33:40.783817 env[1589]: time="2025-09-13T01:33:40.783683134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 01:33:40.783817 env[1589]: time="2025-09-13T01:33:40.783695894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 01:33:40.783817 env[1589]: time="2025-09-13T01:33:40.783730812Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 01:33:40.783817 env[1589]: time="2025-09-13T01:33:40.783746171Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 01:33:40.783817 env[1589]: time="2025-09-13T01:33:40.783757851Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 01:33:40.783817 env[1589]: time="2025-09-13T01:33:40.783775250Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 01:33:40.783972 env[1589]: time="2025-09-13T01:33:40.783925722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 01:33:40.784191 env[1589]: time="2025-09-13T01:33:40.784123071Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 01:33:40.784191 env[1589]: time="2025-09-13T01:33:40.784201507Z" level=info msg="Connect containerd service" Sep 13 01:33:40.890997 extend-filesystems[1556]: Old size kept for /dev/sda9 Sep 13 01:33:40.890997 extend-filesystems[1556]: Found sr0 Sep 13 01:33:40.907541 tar[1578]: linux-arm64/helm Sep 13 01:33:40.785277 systemd[1]: Started containerd.service. Sep 13 01:33:40.907811 bash[1607]: Updated "/home/core/.ssh/authorized_keys" Sep 13 01:33:40.907878 env[1589]: time="2025-09-13T01:33:40.784242665Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 01:33:40.907878 env[1589]: time="2025-09-13T01:33:40.784807875Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 01:33:40.907878 env[1589]: time="2025-09-13T01:33:40.785034023Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 01:33:40.907878 env[1589]: time="2025-09-13T01:33:40.785070661Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 13 01:33:40.907878 env[1589]: time="2025-09-13T01:33:40.791544200Z" level=info msg="Start subscribing containerd event" Sep 13 01:33:40.907878 env[1589]: time="2025-09-13T01:33:40.791617636Z" level=info msg="Start recovering state" Sep 13 01:33:40.907878 env[1589]: time="2025-09-13T01:33:40.791683873Z" level=info msg="Start event monitor" Sep 13 01:33:40.907878 env[1589]: time="2025-09-13T01:33:40.791703512Z" level=info msg="Start snapshots syncer" Sep 13 01:33:40.907878 env[1589]: time="2025-09-13T01:33:40.791713231Z" level=info msg="Start cni network conf syncer for default" Sep 13 01:33:40.907878 env[1589]: time="2025-09-13T01:33:40.791720351Z" level=info msg="Start streaming server" Sep 13 01:33:40.907878 env[1589]: time="2025-09-13T01:33:40.806083114Z" level=info msg="containerd successfully booted in 0.081061s" Sep 13 01:33:40.793228 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 01:33:40.807717 systemd[1]: Finished extend-filesystems.service. Sep 13 01:33:40.838685 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 01:33:41.283140 dbus-daemon[1554]: [system] SELinux support is enabled Sep 13 01:33:41.283689 systemd[1]: Started dbus.service. Sep 13 01:33:41.289045 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 01:33:41.289072 systemd[1]: Reached target system-config.target. Sep 13 01:33:41.289848 dbus-daemon[1554]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 01:33:41.297299 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 01:33:41.297321 systemd[1]: Reached target user-config.target. Sep 13 01:33:41.304211 systemd[1]: Started systemd-logind.service. 
Sep 13 01:33:41.405965 update_engine[1571]: I0913 01:33:41.391171 1571 main.cc:92] Flatcar Update Engine starting Sep 13 01:33:41.414378 tar[1578]: linux-arm64/LICENSE Sep 13 01:33:41.414486 tar[1578]: linux-arm64/README.md Sep 13 01:33:41.422608 systemd[1]: Finished prepare-helm.service. Sep 13 01:33:41.499599 systemd[1]: Started update-engine.service. Sep 13 01:33:41.505768 systemd[1]: Started locksmithd.service. Sep 13 01:33:41.510373 update_engine[1571]: I0913 01:33:41.510344 1571 update_check_scheduler.cc:74] Next update check in 6m45s Sep 13 01:33:41.554559 systemd[1]: nvidia.service: Deactivated successfully. Sep 13 01:33:41.664265 systemd[1]: Started kubelet.service. Sep 13 01:33:42.142432 kubelet[1676]: E0913 01:33:42.142393 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:33:42.144105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:33:42.144252 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:33:42.583003 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 01:33:42.600033 systemd[1]: Finished sshd-keygen.service. Sep 13 01:33:42.606060 systemd[1]: Starting issuegen.service... Sep 13 01:33:42.610764 systemd[1]: Started waagent.service. Sep 13 01:33:42.615425 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 01:33:42.615668 systemd[1]: Finished issuegen.service. Sep 13 01:33:42.621153 systemd[1]: Starting systemd-user-sessions.service... Sep 13 01:33:42.679100 systemd[1]: Finished systemd-user-sessions.service. Sep 13 01:33:42.688138 systemd[1]: Started getty@tty1.service. Sep 13 01:33:42.693964 systemd[1]: Started serial-getty@ttyAMA0.service. 
Sep 13 01:33:42.698966 systemd[1]: Reached target getty.target. Sep 13 01:33:42.703154 systemd[1]: Reached target multi-user.target. Sep 13 01:33:42.709067 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 01:33:42.718732 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 01:33:42.718988 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 01:33:42.724271 systemd[1]: Startup finished in 18.386s (kernel) + 33.354s (userspace) = 51.741s. Sep 13 01:33:43.180989 locksmithd[1667]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 01:33:43.854346 login[1705]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Sep 13 01:33:43.895824 login[1704]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 01:33:44.071110 systemd[1]: Created slice user-500.slice. Sep 13 01:33:44.072205 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 01:33:44.074476 systemd-logind[1569]: New session 2 of user core. Sep 13 01:33:44.135206 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 01:33:44.136598 systemd[1]: Starting user@500.service... Sep 13 01:33:44.200664 (systemd)[1711]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:33:44.669483 systemd[1711]: Queued start job for default target default.target. Sep 13 01:33:44.669734 systemd[1711]: Reached target paths.target. Sep 13 01:33:44.669749 systemd[1711]: Reached target sockets.target. Sep 13 01:33:44.669760 systemd[1711]: Reached target timers.target. Sep 13 01:33:44.669770 systemd[1711]: Reached target basic.target. Sep 13 01:33:44.669826 systemd[1711]: Reached target default.target. Sep 13 01:33:44.669851 systemd[1711]: Startup finished in 462ms. Sep 13 01:33:44.669930 systemd[1]: Started user@500.service. Sep 13 01:33:44.670942 systemd[1]: Started session-2.scope. 
Sep 13 01:33:44.855830 login[1705]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 01:33:44.859621 systemd-logind[1569]: New session 1 of user core. Sep 13 01:33:44.860003 systemd[1]: Started session-1.scope. Sep 13 01:33:52.163727 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 01:33:52.163899 systemd[1]: Stopped kubelet.service. Sep 13 01:33:52.165381 systemd[1]: Starting kubelet.service... Sep 13 01:33:52.497511 systemd[1]: Started kubelet.service. Sep 13 01:33:52.542047 kubelet[1743]: E0913 01:33:52.542008 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:33:52.544401 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:33:52.544540 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:33:52.708975 waagent[1699]: 2025-09-13T01:33:52.708871Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Sep 13 01:33:52.750755 waagent[1699]: 2025-09-13T01:33:52.750611Z INFO Daemon Daemon OS: flatcar 3510.3.8 Sep 13 01:33:52.755456 waagent[1699]: 2025-09-13T01:33:52.755378Z INFO Daemon Daemon Python: 3.9.16 Sep 13 01:33:52.760411 waagent[1699]: 2025-09-13T01:33:52.760298Z INFO Daemon Daemon Run daemon Sep 13 01:33:52.764904 waagent[1699]: 2025-09-13T01:33:52.764829Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Sep 13 01:33:52.802235 waagent[1699]: 2025-09-13T01:33:52.802074Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Sep 13 01:33:52.819794 waagent[1699]: 2025-09-13T01:33:52.819660Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 13 01:33:52.830012 waagent[1699]: 2025-09-13T01:33:52.829918Z INFO Daemon Daemon cloud-init is enabled: False Sep 13 01:33:52.835140 waagent[1699]: 2025-09-13T01:33:52.835063Z INFO Daemon Daemon Using waagent for provisioning Sep 13 01:33:52.841238 waagent[1699]: 2025-09-13T01:33:52.841149Z INFO Daemon Daemon Activate resource disk Sep 13 01:33:52.846088 waagent[1699]: 2025-09-13T01:33:52.846023Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 13 01:33:52.860436 waagent[1699]: 2025-09-13T01:33:52.860351Z INFO Daemon Daemon Found device: None Sep 13 01:33:52.865082 waagent[1699]: 2025-09-13T01:33:52.864999Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 13 01:33:52.874244 waagent[1699]: 2025-09-13T01:33:52.874125Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 13 01:33:52.887168 waagent[1699]: 2025-09-13T01:33:52.887096Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 01:33:52.893172 waagent[1699]: 2025-09-13T01:33:52.893103Z INFO Daemon Daemon Running default provisioning handler Sep 13 01:33:52.906835 waagent[1699]: 2025-09-13T01:33:52.906704Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Sep 13 01:33:52.921964 waagent[1699]: 2025-09-13T01:33:52.921819Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 13 01:33:52.932305 waagent[1699]: 2025-09-13T01:33:52.932209Z INFO Daemon Daemon cloud-init is enabled: False Sep 13 01:33:52.937978 waagent[1699]: 2025-09-13T01:33:52.937884Z INFO Daemon Daemon Copying ovf-env.xml Sep 13 01:33:53.093096 waagent[1699]: 2025-09-13T01:33:53.092908Z INFO Daemon Daemon Successfully mounted dvd Sep 13 01:33:53.227870 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 13 01:33:53.278118 waagent[1699]: 2025-09-13T01:33:53.277971Z INFO Daemon Daemon Detect protocol endpoint Sep 13 01:33:53.283301 waagent[1699]: 2025-09-13T01:33:53.283212Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 01:33:53.289118 waagent[1699]: 2025-09-13T01:33:53.289044Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 13 01:33:53.295948 waagent[1699]: 2025-09-13T01:33:53.295879Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 13 01:33:53.301470 waagent[1699]: 2025-09-13T01:33:53.301407Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 13 01:33:53.306485 waagent[1699]: 2025-09-13T01:33:53.306421Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 13 01:33:53.489721 waagent[1699]: 2025-09-13T01:33:53.489653Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 13 01:33:53.497271 waagent[1699]: 2025-09-13T01:33:53.497224Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 13 01:33:53.502657 waagent[1699]: 2025-09-13T01:33:53.502586Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 13 01:33:54.021770 waagent[1699]: 2025-09-13T01:33:54.021614Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 13 01:33:54.037613 waagent[1699]: 2025-09-13T01:33:54.037533Z INFO Daemon Daemon Forcing an update of the goal state.. 
Sep 13 01:33:54.043915 waagent[1699]: 2025-09-13T01:33:54.043841Z INFO Daemon Daemon Fetching goal state [incarnation 1] Sep 13 01:33:54.124574 waagent[1699]: 2025-09-13T01:33:54.124431Z INFO Daemon Daemon Found private key matching thumbprint 8F5FB8A06CF54A1DD39E887D68C2D10D70DAFA08 Sep 13 01:33:54.133205 waagent[1699]: 2025-09-13T01:33:54.133095Z INFO Daemon Daemon Fetch goal state completed Sep 13 01:33:54.159866 waagent[1699]: 2025-09-13T01:33:54.159810Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 72d45c6e-ede8-4d6f-a0d7-2237a1a16607 New eTag: 15905855715849249596] Sep 13 01:33:54.170599 waagent[1699]: 2025-09-13T01:33:54.170512Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Sep 13 01:33:54.186158 waagent[1699]: 2025-09-13T01:33:54.186095Z INFO Daemon Daemon Starting provisioning Sep 13 01:33:54.191465 waagent[1699]: 2025-09-13T01:33:54.191384Z INFO Daemon Daemon Handle ovf-env.xml. Sep 13 01:33:54.196639 waagent[1699]: 2025-09-13T01:33:54.196560Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-9d226ffbbf] Sep 13 01:33:54.250546 waagent[1699]: 2025-09-13T01:33:54.250409Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-9d226ffbbf] Sep 13 01:33:54.257252 waagent[1699]: 2025-09-13T01:33:54.257144Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 13 01:33:54.264229 waagent[1699]: 2025-09-13T01:33:54.264133Z INFO Daemon Daemon Primary interface is [eth0] Sep 13 01:33:54.280610 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Sep 13 01:33:54.280826 systemd[1]: Stopped systemd-networkd-wait-online.service. Sep 13 01:33:54.280877 systemd[1]: Stopping systemd-networkd-wait-online.service... Sep 13 01:33:54.281053 systemd[1]: Stopping systemd-networkd.service... Sep 13 01:33:54.285257 systemd-networkd[1304]: eth0: DHCPv6 lease lost Sep 13 01:33:54.286674 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Sep 13 01:33:54.286913 systemd[1]: Stopped systemd-networkd.service. Sep 13 01:33:54.288784 systemd[1]: Starting systemd-networkd.service... Sep 13 01:33:54.323337 systemd-networkd[1771]: enP41112s1: Link UP Sep 13 01:33:54.323349 systemd-networkd[1771]: enP41112s1: Gained carrier Sep 13 01:33:54.324625 systemd-networkd[1771]: eth0: Link UP Sep 13 01:33:54.324635 systemd-networkd[1771]: eth0: Gained carrier Sep 13 01:33:54.324992 systemd-networkd[1771]: lo: Link UP Sep 13 01:33:54.325000 systemd-networkd[1771]: lo: Gained carrier Sep 13 01:33:54.325253 systemd-networkd[1771]: eth0: Gained IPv6LL Sep 13 01:33:54.325699 systemd-networkd[1771]: Enumeration completed Sep 13 01:33:54.325828 systemd[1]: Started systemd-networkd.service. Sep 13 01:33:54.327524 waagent[1699]: 2025-09-13T01:33:54.327364Z INFO Daemon Daemon Create user account if not exists Sep 13 01:33:54.327694 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 01:33:54.334332 waagent[1699]: 2025-09-13T01:33:54.334240Z INFO Daemon Daemon User core already exists, skip useradd Sep 13 01:33:54.341940 systemd-networkd[1771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 01:33:54.342231 waagent[1699]: 2025-09-13T01:33:54.341906Z INFO Daemon Daemon Configure sudoer Sep 13 01:33:54.358267 systemd-networkd[1771]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 13 01:33:54.363277 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 01:33:54.364501 waagent[1699]: 2025-09-13T01:33:54.364406Z INFO Daemon Daemon Configure sshd Sep 13 01:33:54.369028 waagent[1699]: 2025-09-13T01:33:54.368935Z INFO Daemon Daemon Deploy ssh public key. 
Sep 13 01:33:55.554450 waagent[1699]: 2025-09-13T01:33:55.554365Z INFO Daemon Daemon Provisioning complete Sep 13 01:33:55.574928 waagent[1699]: 2025-09-13T01:33:55.574860Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 13 01:33:55.581237 waagent[1699]: 2025-09-13T01:33:55.581144Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 13 01:33:55.591759 waagent[1699]: 2025-09-13T01:33:55.591681Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Sep 13 01:33:55.897096 waagent[1778]: 2025-09-13T01:33:55.896932Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Sep 13 01:33:55.897829 waagent[1778]: 2025-09-13T01:33:55.897768Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:33:55.897963 waagent[1778]: 2025-09-13T01:33:55.897916Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:33:55.910851 waagent[1778]: 2025-09-13T01:33:55.910754Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Sep 13 01:33:55.911047 waagent[1778]: 2025-09-13T01:33:55.910994Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Sep 13 01:33:55.974074 waagent[1778]: 2025-09-13T01:33:55.973893Z INFO ExtHandler ExtHandler Found private key matching thumbprint 8F5FB8A06CF54A1DD39E887D68C2D10D70DAFA08 Sep 13 01:33:55.974420 waagent[1778]: 2025-09-13T01:33:55.974365Z INFO ExtHandler ExtHandler Fetch goal state completed Sep 13 01:33:55.988996 waagent[1778]: 2025-09-13T01:33:55.988934Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: b3afdcf3-ecdc-460c-9c7c-2674f6a36529 New eTag: 15905855715849249596] Sep 13 01:33:55.989598 waagent[1778]: 2025-09-13T01:33:55.989535Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Sep 13 01:33:56.122668 waagent[1778]: 2025-09-13T01:33:56.122511Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 13 01:33:56.147219 waagent[1778]: 2025-09-13T01:33:56.147080Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1778 Sep 13 01:33:56.151012 waagent[1778]: 2025-09-13T01:33:56.150934Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 13 01:33:56.152332 waagent[1778]: 2025-09-13T01:33:56.152267Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 13 01:33:56.303820 waagent[1778]: 2025-09-13T01:33:56.303748Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 13 01:33:56.304260 waagent[1778]: 2025-09-13T01:33:56.304198Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 13 01:33:56.312793 waagent[1778]: 2025-09-13T01:33:56.312722Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Sep 13 01:33:56.313412 waagent[1778]: 2025-09-13T01:33:56.313347Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 13 01:33:56.314677 waagent[1778]: 2025-09-13T01:33:56.314605Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Sep 13 01:33:56.316150 waagent[1778]: 2025-09-13T01:33:56.316073Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 13 01:33:56.316718 waagent[1778]: 2025-09-13T01:33:56.316639Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:33:56.317104 waagent[1778]: 2025-09-13T01:33:56.317047Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:33:56.317809 waagent[1778]: 2025-09-13T01:33:56.317747Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 13 01:33:56.318275 waagent[1778]: 2025-09-13T01:33:56.318201Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 13 01:33:56.318275 waagent[1778]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 13 01:33:56.318275 waagent[1778]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 13 01:33:56.318275 waagent[1778]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 13 01:33:56.318275 waagent[1778]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:33:56.318275 waagent[1778]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:33:56.318275 waagent[1778]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:33:56.318835 waagent[1778]: 2025-09-13T01:33:56.318760Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Sep 13 01:33:56.319517 waagent[1778]: 2025-09-13T01:33:56.319421Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 13 01:33:56.319659 waagent[1778]: 2025-09-13T01:33:56.319587Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:33:56.319860 waagent[1778]: 2025-09-13T01:33:56.319807Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 13 01:33:56.322298 waagent[1778]: 2025-09-13T01:33:56.322077Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:33:56.322979 waagent[1778]: 2025-09-13T01:33:56.322884Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 13 01:33:56.323931 waagent[1778]: 2025-09-13T01:33:56.323831Z INFO EnvHandler ExtHandler Configure routes Sep 13 01:33:56.324372 waagent[1778]: 2025-09-13T01:33:56.324284Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 13 01:33:56.324605 waagent[1778]: 2025-09-13T01:33:56.324527Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 13 01:33:56.324761 waagent[1778]: 2025-09-13T01:33:56.324700Z INFO EnvHandler ExtHandler Gateway:None Sep 13 01:33:56.325401 waagent[1778]: 2025-09-13T01:33:56.325323Z INFO EnvHandler ExtHandler Routes:None Sep 13 01:33:56.341475 waagent[1778]: 2025-09-13T01:33:56.341392Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Sep 13 01:33:56.342140 waagent[1778]: 2025-09-13T01:33:56.342083Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 13 01:33:56.343152 waagent[1778]: 2025-09-13T01:33:56.343083Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Sep 13 01:33:56.383158 waagent[1778]: 2025-09-13T01:33:56.383000Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1771' Sep 13 01:33:56.391042 waagent[1778]: 2025-09-13T01:33:56.390947Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Sep 13 01:33:56.521579 waagent[1778]: 2025-09-13T01:33:56.521441Z INFO MonitorHandler ExtHandler Network interfaces: Sep 13 01:33:56.521579 waagent[1778]: Executing ['ip', '-a', '-o', 'link']: Sep 13 01:33:56.521579 waagent[1778]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 13 01:33:56.521579 waagent[1778]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:69:66 brd ff:ff:ff:ff:ff:ff Sep 13 01:33:56.521579 waagent[1778]: 3: enP41112s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:69:66 brd ff:ff:ff:ff:ff:ff\ altname enP41112p0s2 Sep 13 01:33:56.521579 waagent[1778]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 13 01:33:56.521579 waagent[1778]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 13 01:33:56.521579 waagent[1778]: 2: eth0 inet 10.200.20.24/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 13 01:33:56.521579 waagent[1778]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 13 01:33:56.521579 waagent[1778]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 13 01:33:56.521579 waagent[1778]: 2: eth0 inet6 fe80::222:48ff:fe7a:6966/64 scope link \ valid_lft forever preferred_lft forever Sep 13 01:33:56.856936 waagent[1778]: 2025-09-13T01:33:56.856809Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting Sep 13 01:33:57.596384 
waagent[1699]: 2025-09-13T01:33:57.596265Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Sep 13 01:33:57.602142 waagent[1699]: 2025-09-13T01:33:57.602085Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent Sep 13 01:33:58.899047 waagent[1814]: 2025-09-13T01:33:58.898948Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1) Sep 13 01:33:58.899770 waagent[1814]: 2025-09-13T01:33:58.899704Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Sep 13 01:33:58.899920 waagent[1814]: 2025-09-13T01:33:58.899871Z INFO ExtHandler ExtHandler Python: 3.9.16 Sep 13 01:33:58.900061 waagent[1814]: 2025-09-13T01:33:58.900016Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Sep 13 01:33:58.913651 waagent[1814]: 2025-09-13T01:33:58.913520Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 13 01:33:58.914119 waagent[1814]: 2025-09-13T01:33:58.914064Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:33:58.914325 waagent[1814]: 2025-09-13T01:33:58.914275Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:33:58.914564 waagent[1814]: 2025-09-13T01:33:58.914513Z INFO ExtHandler ExtHandler Initializing the goal state... 
Sep 13 01:33:58.928252 waagent[1814]: 2025-09-13T01:33:58.928158Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 13 01:33:58.937717 waagent[1814]: 2025-09-13T01:33:58.937657Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 13 01:33:58.938829 waagent[1814]: 2025-09-13T01:33:58.938770Z INFO ExtHandler Sep 13 01:33:58.938996 waagent[1814]: 2025-09-13T01:33:58.938949Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: addd7479-e3f3-4914-b293-d73df8d07a4c eTag: 15905855715849249596 source: Fabric] Sep 13 01:33:58.939783 waagent[1814]: 2025-09-13T01:33:58.939727Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 13 01:33:58.941051 waagent[1814]: 2025-09-13T01:33:58.940990Z INFO ExtHandler Sep 13 01:33:58.941219 waagent[1814]: 2025-09-13T01:33:58.941155Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 13 01:33:58.948113 waagent[1814]: 2025-09-13T01:33:58.948064Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 13 01:33:58.948672 waagent[1814]: 2025-09-13T01:33:58.948624Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 13 01:33:58.968752 waagent[1814]: 2025-09-13T01:33:58.968690Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Sep 13 01:33:59.033569 waagent[1814]: 2025-09-13T01:33:59.033433Z INFO ExtHandler Downloaded certificate {'thumbprint': '8F5FB8A06CF54A1DD39E887D68C2D10D70DAFA08', 'hasPrivateKey': True} Sep 13 01:33:59.034987 waagent[1814]: 2025-09-13T01:33:59.034922Z INFO ExtHandler Fetch goal state from WireServer completed Sep 13 01:33:59.035949 waagent[1814]: 2025-09-13T01:33:59.035890Z INFO ExtHandler ExtHandler Goal state initialization completed. 
Sep 13 01:33:59.057997 waagent[1814]: 2025-09-13T01:33:59.057873Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Sep 13 01:33:59.067304 waagent[1814]: 2025-09-13T01:33:59.067142Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 13 01:33:59.071656 waagent[1814]: 2025-09-13T01:33:59.071530Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Sep 13 01:33:59.071935 waagent[1814]: 2025-09-13T01:33:59.071877Z INFO ExtHandler ExtHandler Checking state of the firewall Sep 13 01:33:59.162521 waagent[1814]: 2025-09-13T01:33:59.162337Z WARNING ExtHandler ExtHandler The firewall rules for Azure Fabric are not setup correctly (the environment thread will fix it): The following rules are missing: ['ACCEPT DNS', 'DROP'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n', 'iptables: Bad rule (does a matching rule exist in that chain?).\n']. 
Current state: Sep 13 01:33:59.162521 waagent[1814]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:59.162521 waagent[1814]: pkts bytes target prot opt in out source destination Sep 13 01:33:59.162521 waagent[1814]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:59.162521 waagent[1814]: pkts bytes target prot opt in out source destination Sep 13 01:33:59.162521 waagent[1814]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:59.162521 waagent[1814]: pkts bytes target prot opt in out source destination Sep 13 01:33:59.162521 waagent[1814]: 55 7867 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 01:33:59.163659 waagent[1814]: 2025-09-13T01:33:59.163595Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Sep 13 01:33:59.166458 waagent[1814]: 2025-09-13T01:33:59.166334Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Sep 13 01:33:59.166738 waagent[1814]: 2025-09-13T01:33:59.166685Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 13 01:33:59.167129 waagent[1814]: 2025-09-13T01:33:59.167073Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 13 01:33:59.174945 waagent[1814]: 2025-09-13T01:33:59.174874Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Sep 13 01:33:59.175504 waagent[1814]: 2025-09-13T01:33:59.175446Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 13 01:33:59.183795 waagent[1814]: 2025-09-13T01:33:59.183731Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1814 Sep 13 01:33:59.187144 waagent[1814]: 2025-09-13T01:33:59.187080Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 13 01:33:59.187991 waagent[1814]: 2025-09-13T01:33:59.187932Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Sep 13 01:33:59.188937 waagent[1814]: 2025-09-13T01:33:59.188878Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 13 01:33:59.191747 waagent[1814]: 2025-09-13T01:33:59.191688Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Sep 13 01:33:59.192100 waagent[1814]: 2025-09-13T01:33:59.192049Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 13 01:33:59.193504 waagent[1814]: 2025-09-13T01:33:59.193435Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 13 01:33:59.194120 waagent[1814]: 2025-09-13T01:33:59.194062Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:33:59.194427 waagent[1814]: 2025-09-13T01:33:59.194375Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:33:59.195079 waagent[1814]: 2025-09-13T01:33:59.195029Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 13 01:33:59.195552 waagent[1814]: 2025-09-13T01:33:59.195497Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 13 01:33:59.195552 waagent[1814]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 13 01:33:59.195552 waagent[1814]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 13 01:33:59.195552 waagent[1814]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 13 01:33:59.195552 waagent[1814]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:33:59.195552 waagent[1814]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:33:59.195552 waagent[1814]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:33:59.198099 waagent[1814]: 2025-09-13T01:33:59.197987Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 13 01:33:59.198702 waagent[1814]: 2025-09-13T01:33:59.198638Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:33:59.201378 waagent[1814]: 2025-09-13T01:33:59.201223Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:33:59.202267 waagent[1814]: 2025-09-13T01:33:59.202160Z INFO EnvHandler ExtHandler Configure routes Sep 13 01:33:59.202451 waagent[1814]: 2025-09-13T01:33:59.202401Z INFO EnvHandler ExtHandler Gateway:None Sep 13 01:33:59.202585 waagent[1814]: 2025-09-13T01:33:59.202541Z INFO EnvHandler ExtHandler Routes:None Sep 13 01:33:59.203636 waagent[1814]: 2025-09-13T01:33:59.203565Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 13 01:33:59.203779 waagent[1814]: 2025-09-13T01:33:59.203711Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 13 01:33:59.205043 waagent[1814]: 2025-09-13T01:33:59.204960Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 13 01:33:59.205155 waagent[1814]: 2025-09-13T01:33:59.205108Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
Sep 13 01:33:59.210157 waagent[1814]: 2025-09-13T01:33:59.210021Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 13 01:33:59.221068 waagent[1814]: 2025-09-13T01:33:59.220982Z INFO MonitorHandler ExtHandler Network interfaces: Sep 13 01:33:59.221068 waagent[1814]: Executing ['ip', '-a', '-o', 'link']: Sep 13 01:33:59.221068 waagent[1814]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 13 01:33:59.221068 waagent[1814]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:69:66 brd ff:ff:ff:ff:ff:ff Sep 13 01:33:59.221068 waagent[1814]: 3: enP41112s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:69:66 brd ff:ff:ff:ff:ff:ff\ altname enP41112p0s2 Sep 13 01:33:59.221068 waagent[1814]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 13 01:33:59.221068 waagent[1814]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 13 01:33:59.221068 waagent[1814]: 2: eth0 inet 10.200.20.24/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 13 01:33:59.221068 waagent[1814]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 13 01:33:59.221068 waagent[1814]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 13 01:33:59.221068 waagent[1814]: 2: eth0 inet6 fe80::222:48ff:fe7a:6966/64 scope link \ valid_lft forever preferred_lft forever Sep 13 01:33:59.238370 waagent[1814]: 2025-09-13T01:33:59.238278Z INFO ExtHandler ExtHandler Downloading agent manifest Sep 13 01:33:59.253944 waagent[1814]: 2025-09-13T01:33:59.253856Z INFO ExtHandler ExtHandler Sep 13 01:33:59.255143 waagent[1814]: 2025-09-13T01:33:59.255064Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState
started [incarnation_1 channel: WireServer source: Fabric activity: 57051ea5-209a-42bf-be06-4086eecbbaa0 correlation 34579887-df0f-45bf-a810-05c8bd1de0fd created: 2025-09-13T01:32:04.103539Z] Sep 13 01:33:59.258865 waagent[1814]: 2025-09-13T01:33:59.258785Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 13 01:33:59.264061 waagent[1814]: 2025-09-13T01:33:59.263995Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 10 ms] Sep 13 01:33:59.278013 waagent[1814]: 2025-09-13T01:33:59.277934Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 13 01:33:59.286001 waagent[1814]: 2025-09-13T01:33:59.285904Z INFO ExtHandler ExtHandler Looking for existing remote access users. Sep 13 01:33:59.291349 waagent[1814]: 2025-09-13T01:33:59.291244Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9E546A58-C070-425E-AA10-230420F35B2F;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Sep 13 01:33:59.295205 waagent[1814]: 2025-09-13T01:33:59.295100Z WARNING EnvHandler ExtHandler The firewall is not configured correctly. The following rules are missing: ['ACCEPT DNS', 'DROP'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n', 'iptables: Bad rule (does a matching rule exist in that chain?).\n']. Will reset it. 
Current state: Sep 13 01:33:59.295205 waagent[1814]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:59.295205 waagent[1814]: pkts bytes target prot opt in out source destination Sep 13 01:33:59.295205 waagent[1814]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:59.295205 waagent[1814]: pkts bytes target prot opt in out source destination Sep 13 01:33:59.295205 waagent[1814]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:59.295205 waagent[1814]: pkts bytes target prot opt in out source destination Sep 13 01:33:59.295205 waagent[1814]: 84 14349 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 01:33:59.333031 waagent[1814]: 2025-09-13T01:33:59.332894Z INFO EnvHandler ExtHandler The firewall was setup successfully: Sep 13 01:33:59.333031 waagent[1814]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:59.333031 waagent[1814]: pkts bytes target prot opt in out source destination Sep 13 01:33:59.333031 waagent[1814]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:59.333031 waagent[1814]: pkts bytes target prot opt in out source destination Sep 13 01:33:59.333031 waagent[1814]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:59.333031 waagent[1814]: pkts bytes target prot opt in out source destination Sep 13 01:33:59.333031 waagent[1814]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 13 01:33:59.333031 waagent[1814]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 01:33:59.333031 waagent[1814]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 13 01:33:59.336198 waagent[1814]: 2025-09-13T01:33:59.336107Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 13 01:34:02.663861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 01:34:02.664031 systemd[1]: Stopped kubelet.service. Sep 13 01:34:02.665493 systemd[1]: Starting kubelet.service... 
Sep 13 01:34:02.984522 systemd[1]: Started kubelet.service. Sep 13 01:34:03.031409 kubelet[1866]: E0913 01:34:03.031366 1866 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:34:03.033210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:34:03.033355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:34:13.163798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 01:34:13.163973 systemd[1]: Stopped kubelet.service. Sep 13 01:34:13.165422 systemd[1]: Starting kubelet.service... Sep 13 01:34:13.483990 systemd[1]: Started kubelet.service. Sep 13 01:34:13.519297 kubelet[1880]: E0913 01:34:13.519242 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:34:13.521021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:34:13.521165 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:34:15.287431 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 13 01:34:20.851681 systemd[1]: Created slice system-sshd.slice. Sep 13 01:34:20.852937 systemd[1]: Started sshd@0-10.200.20.24:22-10.200.16.10:37240.service. 
Sep 13 01:34:21.520837 sshd[1887]: Accepted publickey for core from 10.200.16.10 port 37240 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:34:21.539621 sshd[1887]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:34:21.543550 systemd-logind[1569]: New session 3 of user core. Sep 13 01:34:21.543923 systemd[1]: Started session-3.scope. Sep 13 01:34:21.890626 systemd[1]: Started sshd@1-10.200.20.24:22-10.200.16.10:37254.service. Sep 13 01:34:22.303076 sshd[1892]: Accepted publickey for core from 10.200.16.10 port 37254 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:34:22.304726 sshd[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:34:22.308393 systemd-logind[1569]: New session 4 of user core. Sep 13 01:34:22.308797 systemd[1]: Started session-4.scope. Sep 13 01:34:22.631265 sshd[1892]: pam_unix(sshd:session): session closed for user core Sep 13 01:34:22.633879 systemd[1]: sshd@1-10.200.20.24:22-10.200.16.10:37254.service: Deactivated successfully. Sep 13 01:34:22.634653 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 01:34:22.635713 systemd-logind[1569]: Session 4 logged out. Waiting for processes to exit. Sep 13 01:34:22.636516 systemd-logind[1569]: Removed session 4. Sep 13 01:34:22.697466 systemd[1]: Started sshd@2-10.200.20.24:22-10.200.16.10:37270.service. Sep 13 01:34:23.107719 sshd[1899]: Accepted publickey for core from 10.200.16.10 port 37270 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:34:23.109340 sshd[1899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:34:23.113049 systemd-logind[1569]: New session 5 of user core. Sep 13 01:34:23.113479 systemd[1]: Started session-5.scope. 
Sep 13 01:34:23.418428 sshd[1899]: pam_unix(sshd:session): session closed for user core Sep 13 01:34:23.420811 systemd[1]: sshd@2-10.200.20.24:22-10.200.16.10:37270.service: Deactivated successfully. Sep 13 01:34:23.421573 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 01:34:23.422702 systemd-logind[1569]: Session 5 logged out. Waiting for processes to exit. Sep 13 01:34:23.423451 systemd-logind[1569]: Removed session 5. Sep 13 01:34:23.487138 systemd[1]: Started sshd@3-10.200.20.24:22-10.200.16.10:37280.service. Sep 13 01:34:23.663779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 13 01:34:23.663950 systemd[1]: Stopped kubelet.service. Sep 13 01:34:23.665418 systemd[1]: Starting kubelet.service... Sep 13 01:34:23.828262 systemd[1]: Started kubelet.service. Sep 13 01:34:23.901008 kubelet[1916]: E0913 01:34:23.900940 1916 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:34:23.902666 sshd[1906]: Accepted publickey for core from 10.200.16.10 port 37280 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:34:23.903060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:34:23.904027 sshd[1906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:34:23.903226 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:34:23.908478 systemd[1]: Started session-6.scope. Sep 13 01:34:23.909357 systemd-logind[1569]: New session 6 of user core. Sep 13 01:34:24.230730 sshd[1906]: pam_unix(sshd:session): session closed for user core Sep 13 01:34:24.233490 systemd-logind[1569]: Session 6 logged out. Waiting for processes to exit. 
Sep 13 01:34:24.234253 systemd[1]: sshd@3-10.200.20.24:22-10.200.16.10:37280.service: Deactivated successfully. Sep 13 01:34:24.234994 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 01:34:24.235407 systemd-logind[1569]: Removed session 6. Sep 13 01:34:24.297731 systemd[1]: Started sshd@4-10.200.20.24:22-10.200.16.10:37294.service. Sep 13 01:34:24.708231 sshd[1928]: Accepted publickey for core from 10.200.16.10 port 37294 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:34:24.709819 sshd[1928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:34:24.713941 systemd[1]: Started session-7.scope. Sep 13 01:34:24.714407 systemd-logind[1569]: New session 7 of user core. Sep 13 01:34:25.403657 sudo[1932]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 01:34:25.403880 sudo[1932]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 01:34:25.461378 dbus-daemon[1554]: avc: received setenforce notice (enforcing=1) Sep 13 01:34:25.463200 sudo[1932]: pam_unix(sudo:session): session closed for user root Sep 13 01:34:25.554389 sshd[1928]: pam_unix(sshd:session): session closed for user core Sep 13 01:34:25.557873 systemd[1]: sshd@4-10.200.20.24:22-10.200.16.10:37294.service: Deactivated successfully. Sep 13 01:34:25.558642 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 01:34:25.559053 systemd-logind[1569]: Session 7 logged out. Waiting for processes to exit. Sep 13 01:34:25.559842 systemd-logind[1569]: Removed session 7. Sep 13 01:34:25.622773 systemd[1]: Started sshd@5-10.200.20.24:22-10.200.16.10:37302.service. Sep 13 01:34:26.036316 sshd[1936]: Accepted publickey for core from 10.200.16.10 port 37302 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:34:26.037733 sshd[1936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:34:26.042013 systemd-logind[1569]: New session 8 of user core. 
Sep 13 01:34:26.042467 systemd[1]: Started session-8.scope. Sep 13 01:34:26.274107 sudo[1941]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 01:34:26.274346 sudo[1941]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 01:34:26.276838 sudo[1941]: pam_unix(sudo:session): session closed for user root Sep 13 01:34:26.281276 sudo[1940]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 01:34:26.281730 sudo[1940]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 01:34:26.290144 systemd[1]: Stopping audit-rules.service... Sep 13 01:34:26.290000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 13 01:34:26.295175 kernel: kauditd_printk_skb: 32 callbacks suppressed Sep 13 01:34:26.295275 kernel: audit: type=1305 audit(1757727266.290:166): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 13 01:34:26.295753 auditctl[1944]: No rules Sep 13 01:34:26.296248 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 01:34:26.296481 systemd[1]: Stopped audit-rules.service. Sep 13 01:34:26.298083 systemd[1]: Starting audit-rules.service... 
Sep 13 01:34:26.290000 audit[1944]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffda25d060 a2=420 a3=0 items=0 ppid=1 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:34:26.332817 kernel: audit: type=1300 audit(1757727266.290:166): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffda25d060 a2=420 a3=0 items=0 ppid=1 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:34:26.290000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Sep 13 01:34:26.340165 kernel: audit: type=1327 audit(1757727266.290:166): proctitle=2F7362696E2F617564697463746C002D44 Sep 13 01:34:26.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:34:26.358681 kernel: audit: type=1131 audit(1757727266.295:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:34:26.358755 augenrules[1962]: No rules Sep 13 01:34:26.359298 systemd[1]: Finished audit-rules.service. Sep 13 01:34:26.360898 sudo[1940]: pam_unix(sudo:session): session closed for user root Sep 13 01:34:26.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:34:26.379299 kernel: audit: type=1130 audit(1757727266.358:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:34:26.360000 audit[1940]: USER_END pid=1940 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 01:34:26.397897 kernel: audit: type=1106 audit(1757727266.360:169): pid=1940 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 01:34:26.360000 audit[1940]: CRED_DISP pid=1940 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 01:34:26.415195 kernel: audit: type=1104 audit(1757727266.360:170): pid=1940 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 01:34:26.458987 sshd[1936]: pam_unix(sshd:session): session closed for user core Sep 13 01:34:26.458000 audit[1936]: USER_END pid=1936 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:34:26.462229 systemd-logind[1569]: Session 8 logged out. Waiting for processes to exit. 
Sep 13 01:34:26.463037 systemd[1]: sshd@5-10.200.20.24:22-10.200.16.10:37302.service: Deactivated successfully. Sep 13 01:34:26.463806 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 01:34:26.464930 systemd-logind[1569]: Removed session 8. Sep 13 01:34:26.458000 audit[1936]: CRED_DISP pid=1936 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:34:26.499966 kernel: audit: type=1106 audit(1757727266.458:171): pid=1936 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:34:26.500024 kernel: audit: type=1104 audit(1757727266.458:172): pid=1936 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:34:26.500047 kernel: audit: type=1131 audit(1757727266.462:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.24:22-10.200.16.10:37302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:34:26.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.24:22-10.200.16.10:37302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:34:26.525623 systemd[1]: Started sshd@6-10.200.20.24:22-10.200.16.10:37314.service. 
Sep 13 01:34:26.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.24:22-10.200.16.10:37314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:34:26.935000 audit[1969]: USER_ACCT pid=1969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:34:26.937141 sshd[1969]: Accepted publickey for core from 10.200.16.10 port 37314 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:34:26.938000 audit[1969]: CRED_ACQ pid=1969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:34:26.938000 audit[1969]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff2587f80 a2=3 a3=1 items=0 ppid=1 pid=1969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:34:26.938000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 01:34:26.938614 sshd[1969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:34:26.942816 systemd[1]: Started session-9.scope. Sep 13 01:34:26.943991 systemd-logind[1569]: New session 9 of user core. 
Sep 13 01:34:26.947000 audit[1969]: USER_START pid=1969 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:34:26.949000 audit[1972]: CRED_ACQ pid=1972 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:34:26.984090 update_engine[1571]: I0913 01:34:26.984056 1571 update_attempter.cc:509] Updating boot flags... Sep 13 01:34:27.176000 audit[2037]: USER_ACCT pid=2037 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 01:34:27.177000 audit[2037]: CRED_REFR pid=2037 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 01:34:27.177342 sudo[2037]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 01:34:27.177580 sudo[2037]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 01:34:27.178000 audit[2037]: USER_START pid=2037 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 01:34:27.214096 systemd[1]: Starting docker.service... 
Sep 13 01:34:27.267987 env[2049]: time="2025-09-13T01:34:27.267943726Z" level=info msg="Starting up" Sep 13 01:34:27.269546 env[2049]: time="2025-09-13T01:34:27.269513762Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 01:34:27.269546 env[2049]: time="2025-09-13T01:34:27.269540522Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 01:34:27.269652 env[2049]: time="2025-09-13T01:34:27.269568042Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 01:34:27.269652 env[2049]: time="2025-09-13T01:34:27.269578282Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 01:34:27.271152 env[2049]: time="2025-09-13T01:34:27.271128558Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 01:34:27.271209 env[2049]: time="2025-09-13T01:34:27.271152358Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 01:34:27.271209 env[2049]: time="2025-09-13T01:34:27.271167518Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 01:34:27.271209 env[2049]: time="2025-09-13T01:34:27.271176598Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 01:34:27.276514 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2158065716-merged.mount: Deactivated successfully. Sep 13 01:34:27.413703 env[2049]: time="2025-09-13T01:34:27.413664116Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 13 01:34:27.413703 env[2049]: time="2025-09-13T01:34:27.413692476Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 13 01:34:27.413922 env[2049]: time="2025-09-13T01:34:27.413875756Z" level=info msg="Loading containers: start." 
Sep 13 01:34:27.516000 audit[2075]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=2075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:34:27.516000 audit[2075]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffcd8a3400 a2=0 a3=1 items=0 ppid=2049 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:34:27.516000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Sep 13 01:34:27.518000 audit[2077]: NETFILTER_CFG table=filter:8 family=2 entries=2 op=nft_register_chain pid=2077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:34:27.518000 audit[2077]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffdf1744b0 a2=0 a3=1 items=0 ppid=2049 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:34:27.518000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Sep 13 01:34:27.520000 audit[2079]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:34:27.520000 audit[2079]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcccab370 a2=0 a3=1 items=0 ppid=2049 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:34:27.520000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 13 01:34:27.522000 
audit[2081]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2081 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.522000 audit[2081]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff68507a0 a2=0 a3=1 items=0 ppid=2049 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.522000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Sep 13 01:34:27.523000 audit[2083]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_rule pid=2083 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.523000 audit[2083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff536e760 a2=0 a3=1 items=0 ppid=2049 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.523000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Sep 13 01:34:27.525000 audit[2085]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2085 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.525000 audit[2085]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdf977100 a2=0 a3=1 items=0 ppid=2049 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.525000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Sep 13 01:34:27.545000 audit[2087]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.545000 audit[2087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffec799d40 a2=0 a3=1 items=0 ppid=2049 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.545000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Sep 13 01:34:27.547000 audit[2089]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2089 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.547000 audit[2089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc886f2e0 a2=0 a3=1 items=0 ppid=2049 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.547000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Sep 13 01:34:27.548000 audit[2091]: NETFILTER_CFG table=filter:15 family=2 entries=2 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.548000 audit[2091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=fffff1531550 a2=0 a3=1 items=0 ppid=2049 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.548000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 13 01:34:27.568000 audit[2095]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_unregister_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.568000 audit[2095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff3470d60 a2=0 a3=1 items=0 ppid=2049 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.568000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Sep 13 01:34:27.576000 audit[2096]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2096 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.576000 audit[2096]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffedcb10d0 a2=0 a3=1 items=0 ppid=2049 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.576000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 13 01:34:27.661216 kernel: Initializing XFRM netlink socket
Sep 13 01:34:27.699654 env[2049]: time="2025-09-13T01:34:27.699620471Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 01:34:27.852000 audit[2104]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2104 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.852000 audit[2104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffe63507c0 a2=0 a3=1 items=0 ppid=2049 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.852000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Sep 13 01:34:27.893000 audit[2107]: NETFILTER_CFG table=nat:19 family=2 entries=1 op=nft_register_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.893000 audit[2107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffffda34330 a2=0 a3=1 items=0 ppid=2049 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.893000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Sep 13 01:34:27.896000 audit[2110]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2110 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.896000 audit[2110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffeeb52fb0 a2=0 a3=1 items=0 ppid=2049 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.896000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Sep 13 01:34:27.898000 audit[2112]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2112 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.898000 audit[2112]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffddbc24b0 a2=0 a3=1 items=0 ppid=2049 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.898000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Sep 13 01:34:27.900000 audit[2114]: NETFILTER_CFG table=nat:22 family=2 entries=2 op=nft_register_chain pid=2114 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.900000 audit[2114]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=fffff488dda0 a2=0 a3=1 items=0 ppid=2049 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.900000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Sep 13 01:34:27.902000 audit[2116]: NETFILTER_CFG table=nat:23 family=2 entries=2 op=nft_register_chain pid=2116 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.902000 audit[2116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffe312e6e0 a2=0 a3=1 items=0 ppid=2049 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.902000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Sep 13 01:34:27.904000 audit[2118]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2118 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.904000 audit[2118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffd069c500 a2=0 a3=1 items=0 ppid=2049 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.904000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Sep 13 01:34:27.905000 audit[2120]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2120 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.905000 audit[2120]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffd34ca730 a2=0 a3=1 items=0 ppid=2049 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.905000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Sep 13 01:34:27.908000 audit[2122]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2122 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.908000 audit[2122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffe4b81e60 a2=0 a3=1 items=0 ppid=2049 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.908000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Sep 13 01:34:27.908000 audit[2124]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=2124 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.908000 audit[2124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffffa458af0 a2=0 a3=1 items=0 ppid=2049 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.908000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Sep 13 01:34:27.910000 audit[2126]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=2126 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.910000 audit[2126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffde0f8180 a2=0 a3=1 items=0 ppid=2049 pid=2126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.910000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Sep 13 01:34:27.912381 systemd-networkd[1771]: docker0: Link UP
Sep 13 01:34:27.935000 audit[2130]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_unregister_rule pid=2130 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.935000 audit[2130]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff0bfb4e0 a2=0 a3=1 items=0 ppid=2049 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.935000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Sep 13 01:34:27.943000 audit[2131]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2131 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:27.943000 audit[2131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffdbd5ce70 a2=0 a3=1 items=0 ppid=2049 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:27.943000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 13 01:34:27.945693 env[2049]: time="2025-09-13T01:34:27.945655726Z" level=info msg="Loading containers: done."
Sep 13 01:34:27.956650 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1993932380-merged.mount: Deactivated successfully.
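The `audit: PROCTITLE` records above encode the invoked command line as a hex string in which argv elements are separated by NUL bytes. A minimal sketch of a decoder (the helper name `decode_proctitle` is my own; empty argv elements, which appear as doubled `00` bytes in some records above, decode to extra spaces):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE payload: hex bytes, argv joined by NULs."""
    raw = bytes.fromhex(hex_str)
    # Replace the NUL separators with spaces to recover the command line.
    return raw.replace(b"\x00", b" ").decode("ascii").rstrip()
```

For example, the proctitle logged with audit[2126] decodes to `/usr/sbin/iptables --wait -t filter -I DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP`, matching the Docker isolation rules being installed here.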
Sep 13 01:34:27.999311 env[2049]: time="2025-09-13T01:34:27.999266270Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 01:34:27.999533 env[2049]: time="2025-09-13T01:34:27.999462830Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 01:34:27.999593 env[2049]: time="2025-09-13T01:34:27.999570309Z" level=info msg="Daemon has completed initialization"
Sep 13 01:34:28.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:28.041352 systemd[1]: Started docker.service.
Sep 13 01:34:28.044740 env[2049]: time="2025-09-13T01:34:28.044551042Z" level=info msg="API listen on /run/docker.sock"
Sep 13 01:34:31.849525 env[1589]: time="2025-09-13T01:34:31.849476460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 01:34:32.681623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2638429603.mount: Deactivated successfully.
Sep 13 01:34:33.937856 kernel: kauditd_printk_skb: 84 callbacks suppressed
Sep 13 01:34:33.937974 kernel: audit: type=1130 audit(1757727273.913:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:33.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:33.913764 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 13 01:34:33.913942 systemd[1]: Stopped kubelet.service.
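The daemon startup messages above report the default docker0 bridge subnet of 172.17.0.0/16 (overridable with `--bip`). A small sketch, using only the standard-library `ipaddress` module, of checking whether an address falls inside that default bridge range:

```python
import ipaddress

# Default Docker bridge subnet, as reported by the daemon log above.
DEFAULT_BRIDGE = ipaddress.ip_network("172.17.0.0/16")

def on_default_bridge(addr: str) -> bool:
    """Return True if addr belongs to the default docker0 subnet."""
    return ipaddress.ip_address(addr) in DEFAULT_BRIDGE
```

This is how one would verify that a chosen `--bip` value does (or does not) collide with the default range.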
Sep 13 01:34:33.915621 systemd[1]: Starting kubelet.service...
Sep 13 01:34:33.954508 kernel: audit: type=1131 audit(1757727273.913:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:33.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:34.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:34.047963 systemd[1]: Started kubelet.service.
Sep 13 01:34:34.079207 kernel: audit: type=1130 audit(1757727274.048:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:34.132567 kubelet[2170]: E0913 01:34:34.132525 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:34:34.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 13 01:34:34.137582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:34:34.137718 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:34:34.156211 kernel: audit: type=1131 audit(1757727274.137:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 13 01:34:34.468219 env[1589]: time="2025-09-13T01:34:34.467403928Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:34.474616 env[1589]: time="2025-09-13T01:34:34.474574837Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:34.479251 env[1589]: time="2025-09-13T01:34:34.479218429Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:34.487119 env[1589]: time="2025-09-13T01:34:34.487072457Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:34.487813 env[1589]: time="2025-09-13T01:34:34.487784736Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 13 01:34:34.489431 env[1589]: time="2025-09-13T01:34:34.489407773Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 01:34:35.878768 env[1589]: time="2025-09-13T01:34:35.878693616Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:35.886125 env[1589]: time="2025-09-13T01:34:35.886075525Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:35.890558 env[1589]: time="2025-09-13T01:34:35.890523478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:35.896518 env[1589]: time="2025-09-13T01:34:35.896486509Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:35.897148 env[1589]: time="2025-09-13T01:34:35.897119308Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 13 01:34:35.897685 env[1589]: time="2025-09-13T01:34:35.897647067Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 01:34:37.069262 env[1589]: time="2025-09-13T01:34:37.069217080Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:37.076331 env[1589]: time="2025-09-13T01:34:37.076291932Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:37.080710 env[1589]: time="2025-09-13T01:34:37.080675915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:37.085093 env[1589]: time="2025-09-13T01:34:37.085068177Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:37.086092 env[1589]: time="2025-09-13T01:34:37.086066693Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 13 01:34:37.087197 env[1589]: time="2025-09-13T01:34:37.087154049Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 01:34:38.305521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785313277.mount: Deactivated successfully.
Sep 13 01:34:38.795920 env[1589]: time="2025-09-13T01:34:38.795851299Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:38.803166 env[1589]: time="2025-09-13T01:34:38.803129711Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:38.806751 env[1589]: time="2025-09-13T01:34:38.806724737Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:38.811332 env[1589]: time="2025-09-13T01:34:38.811296640Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:38.811821 env[1589]: time="2025-09-13T01:34:38.811794518Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 13 01:34:38.812390 env[1589]: time="2025-09-13T01:34:38.812365116Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 01:34:39.493771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2508070894.mount: Deactivated successfully.
Sep 13 01:34:41.040212 env[1589]: time="2025-09-13T01:34:41.040155367Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:41.047857 env[1589]: time="2025-09-13T01:34:41.047819700Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:41.053099 env[1589]: time="2025-09-13T01:34:41.053063202Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:41.060888 env[1589]: time="2025-09-13T01:34:41.060849894Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:41.061939 env[1589]: time="2025-09-13T01:34:41.061911371Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 13 01:34:41.063277 env[1589]: time="2025-09-13T01:34:41.063252166Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 01:34:41.633362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1834288348.mount: Deactivated successfully.
Sep 13 01:34:41.658376 env[1589]: time="2025-09-13T01:34:41.658320555Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:41.666139 env[1589]: time="2025-09-13T01:34:41.666096487Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:41.670970 env[1589]: time="2025-09-13T01:34:41.670941110Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:41.675046 env[1589]: time="2025-09-13T01:34:41.675010096Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:41.675638 env[1589]: time="2025-09-13T01:34:41.675603694Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 13 01:34:41.677490 env[1589]: time="2025-09-13T01:34:41.677460567Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 01:34:42.305365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2309535041.mount: Deactivated successfully.
Sep 13 01:34:44.163744 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Sep 13 01:34:44.163914 systemd[1]: Stopped kubelet.service.
Sep 13 01:34:44.165418 systemd[1]: Starting kubelet.service...
Sep 13 01:34:44.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:44.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:44.202777 kernel: audit: type=1130 audit(1757727284.163:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:44.202931 kernel: audit: type=1131 audit(1757727284.163:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:44.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:44.292361 systemd[1]: Started kubelet.service.
Sep 13 01:34:44.322215 kernel: audit: type=1130 audit(1757727284.292:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:44.382175 kubelet[2186]: E0913 01:34:44.382120 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:34:44.383834 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:34:44.383980 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:34:44.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 13 01:34:44.407213 kernel: audit: type=1131 audit(1757727284.384:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 13 01:34:46.080386 env[1589]: time="2025-09-13T01:34:46.080339300Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:46.087637 env[1589]: time="2025-09-13T01:34:46.087597038Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:46.093365 env[1589]: time="2025-09-13T01:34:46.093329300Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:46.098823 env[1589]: time="2025-09-13T01:34:46.098784243Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:46.099623 env[1589]: time="2025-09-13T01:34:46.099588121Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 13 01:34:50.357669 systemd[1]: Stopped kubelet.service.
Sep 13 01:34:50.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:50.364483 systemd[1]: Starting kubelet.service...
Sep 13 01:34:50.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:50.398224 kernel: audit: type=1130 audit(1757727290.357:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:50.398420 kernel: audit: type=1131 audit(1757727290.362:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:50.415910 systemd[1]: Reloading.
Sep 13 01:34:50.481452 /usr/lib/systemd/system-generators/torcx-generator[2244]: time="2025-09-13T01:34:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 01:34:50.481815 /usr/lib/systemd/system-generators/torcx-generator[2244]: time="2025-09-13T01:34:50Z" level=info msg="torcx already run"
Sep 13 01:34:50.586036 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 01:34:50.586056 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 01:34:50.601786 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 01:34:50.700836 systemd[1]: Started kubelet.service.
Sep 13 01:34:50.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:50.720598 kernel: audit: type=1130 audit(1757727290.700:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:50.720239 systemd[1]: Stopping kubelet.service...
Sep 13 01:34:50.721569 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 01:34:50.721821 systemd[1]: Stopped kubelet.service.
Sep 13 01:34:50.724297 systemd[1]: Starting kubelet.service...
Sep 13 01:34:50.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:50.744348 kernel: audit: type=1131 audit(1757727290.721:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:50.895550 systemd[1]: Started kubelet.service.
Sep 13 01:34:50.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:50.920245 kernel: audit: type=1130 audit(1757727290.895:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:34:50.953304 kubelet[2324]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 01:34:50.953304 kubelet[2324]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 01:34:50.953304 kubelet[2324]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 01:34:50.953304 kubelet[2324]: I0913 01:34:50.952860 2324 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 01:34:51.450887 kubelet[2324]: I0913 01:34:51.450851 2324 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 01:34:51.451052 kubelet[2324]: I0913 01:34:51.451042 2324 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 01:34:51.451377 kubelet[2324]: I0913 01:34:51.451361 2324 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 01:34:51.481945 kubelet[2324]: E0913 01:34:51.481904 2324 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:51.485918 kubelet[2324]: I0913 01:34:51.485869 2324 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 01:34:51.493124 kubelet[2324]: E0913 01:34:51.493094 2324 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 01:34:51.493303 kubelet[2324]: I0913 01:34:51.493289 2324 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 01:34:51.497096 kubelet[2324]: I0913 01:34:51.497074 2324 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 01:34:51.498191 kubelet[2324]: I0913 01:34:51.498164 2324 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 01:34:51.498433 kubelet[2324]: I0913 01:34:51.498402 2324 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 01:34:51.498682 kubelet[2324]: I0913 01:34:51.498499 2324 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-9d226ffbbf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 13 01:34:51.498812 kubelet[2324]: I0913 01:34:51.498800 2324 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 01:34:51.498874 kubelet[2324]: I0913 01:34:51.498865 2324 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 01:34:51.499034 kubelet[2324]: I0913 01:34:51.499023 2324 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 01:34:51.503055 kubelet[2324]: I0913 01:34:51.503025 2324 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 01:34:51.503170 kubelet[2324]: I0913 01:34:51.503159 2324 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 01:34:51.503293 kubelet[2324]: I0913 01:34:51.503283 2324 kubelet.go:314] "Adding apiserver pod source"
Sep 13 01:34:51.503358 kubelet[2324]: I0913 01:34:51.503349 2324 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 01:34:51.511737 kubelet[2324]: W0913 01:34:51.511575 2324 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-9d226ffbbf&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
Sep 13 01:34:51.511737 kubelet[2324]: E0913 01:34:51.511657 2324 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-9d226ffbbf&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:51.512082 kubelet[2324]: W0913 01:34:51.512030 2324 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
Sep 13 01:34:51.512141 kubelet[2324]: E0913 01:34:51.512080 2324 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:51.512201 kubelet[2324]: I0913 01:34:51.512168 2324 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 01:34:51.512690 kubelet[2324]: I0913 01:34:51.512658 2324 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 01:34:51.512745 kubelet[2324]: W0913 01:34:51.512712 2324 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 01:34:51.513413 kubelet[2324]: I0913 01:34:51.513390 2324 server.go:1274] "Started kubelet"
Sep 13 01:34:51.521000 audit[2324]: AVC avc: denied { mac_admin } for pid=2324 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 01:34:51.522275 kubelet[2324]: I0913 01:34:51.522249 2324 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Sep 13 01:34:51.522394 kubelet[2324]: I0913 01:34:51.522380 2324 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Sep 13 01:34:51.522527 kubelet[2324]: I0913 01:34:51.522516 2324 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 01:34:51.527556
kubelet[2324]: I0913 01:34:51.527521 2324 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 01:34:51.528505 kubelet[2324]: I0913 01:34:51.528488 2324 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 01:34:51.529514 kubelet[2324]: I0913 01:34:51.529469 2324 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 01:34:51.529794 kubelet[2324]: I0913 01:34:51.529779 2324 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 01:34:51.521000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 01:34:51.541234 kubelet[2324]: E0913 01:34:51.541210 2324 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 01:34:51.541594 kubelet[2324]: I0913 01:34:51.541576 2324 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 01:34:51.545517 kubelet[2324]: E0913 01:34:51.545484 2324 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-9d226ffbbf\" not found"
Sep 13 01:34:51.545663 kubelet[2324]: I0913 01:34:51.545652 2324 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 01:34:51.545946 kubelet[2324]: I0913 01:34:51.545927 2324 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 01:34:51.546141 kubelet[2324]: I0913 01:34:51.546128 2324 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 01:34:51.546830 kubelet[2324]: I0913 01:34:51.546810 2324 factory.go:221] Registration of the systemd container factory successfully
Sep 13 01:34:51.547040 kubelet[2324]: I0913 01:34:51.547021 2324 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 01:34:51.547534 kubelet[2324]: W0913 01:34:51.547499 2324 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
Sep 13 01:34:51.547644 kubelet[2324]: E0913 01:34:51.547625 2324 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:51.550318 kernel: audit: type=1400 audit(1757727291.521:221): avc: denied { mac_admin } for pid=2324 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 01:34:51.550407 kernel: audit: type=1401 audit(1757727291.521:221): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 01:34:51.521000 audit[2324]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40009121b0 a1=400087afa8 a2=4000912180 a3=25 items=0 ppid=1 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.575573 kernel: audit: type=1300 audit(1757727291.521:221): arch=c00000b7 syscall=5 success=no exit=-22 a0=40009121b0 a1=400087afa8 a2=4000912180 a3=25 items=0 ppid=1 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.575671 kernel: audit: type=1327 audit(1757727291.521:221): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 01:34:51.521000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 01:34:51.599257 kernel: audit: type=1400 audit(1757727291.522:222): avc: denied { mac_admin } for pid=2324 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 01:34:51.522000 audit[2324]: AVC avc: denied { mac_admin } for pid=2324 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 01:34:51.522000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 01:34:51.522000 audit[2324]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000acb160 a1=400087afc0 a2=4000912240 a3=25 items=0 ppid=1 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.522000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 01:34:51.525000 audit[2336]: NETFILTER_CFG table=mangle:31 family=2 entries=2 op=nft_register_chain pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:51.525000
audit[2336]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc59f3df0 a2=0 a3=1 items=0 ppid=2324 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.525000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Sep 13 01:34:51.526000 audit[2337]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_chain pid=2337 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:51.526000 audit[2337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc00cd80 a2=0 a3=1 items=0 ppid=2324 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.526000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Sep 13 01:34:51.618000 audit[2339]: NETFILTER_CFG table=filter:33 family=2 entries=2 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:51.618000 audit[2339]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc101b220 a2=0 a3=1 items=0 ppid=2324 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.618000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Sep 13 01:34:51.621023 kubelet[2324]: E0913 01:34:51.620990 2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-9d226ffbbf?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="200ms"
Sep 13 01:34:51.620000 audit[2341]: NETFILTER_CFG table=filter:34 family=2 entries=2 op=nft_register_chain pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:51.620000 audit[2341]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe193a360 a2=0 a3=1 items=0 ppid=2324 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.620000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Sep 13 01:34:51.622029 kubelet[2324]: I0913 01:34:51.622010 2324 factory.go:221] Registration of the containerd container factory successfully
Sep 13 01:34:51.642070 kubelet[2324]: E0913 01:34:51.640884 2324 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.24:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-9d226ffbbf.1864b3a8a6a9f026 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-9d226ffbbf,UID:ci-3510.3.8-n-9d226ffbbf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-9d226ffbbf,},FirstTimestamp:2025-09-13 01:34:51.513368614 +0000 UTC m=+0.600626907,LastTimestamp:2025-09-13 01:34:51.513368614 +0000 UTC m=+0.600626907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-9d226ffbbf,}"
Sep 13
01:34:51.646170 kubelet[2324]: E0913 01:34:51.645922 2324 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-9d226ffbbf\" not found"
Sep 13 01:34:51.687762 kubelet[2324]: I0913 01:34:51.687700 2324 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 01:34:51.687913 kubelet[2324]: I0913 01:34:51.687900 2324 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 01:34:51.688023 kubelet[2324]: I0913 01:34:51.688013 2324 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 01:34:51.698449 kubelet[2324]: I0913 01:34:51.698345 2324 policy_none.go:49] "None policy: Start"
Sep 13 01:34:51.699068 kubelet[2324]: I0913 01:34:51.699044 2324 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 01:34:51.699068 kubelet[2324]: I0913 01:34:51.699071 2324 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 01:34:51.699000 audit[2347]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_rule pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:51.699000 audit[2347]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc0a20590 a2=0 a3=1 items=0 ppid=2324 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.699000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Sep 13 01:34:51.700333 kubelet[2324]: I0913 01:34:51.700310 2324 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 01:34:51.700000 audit[2348]: NETFILTER_CFG table=mangle:36 family=10 entries=2 op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 01:34:51.700000 audit[2348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffbfe26d0 a2=0 a3=1 items=0 ppid=2324 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.700000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Sep 13 01:34:51.701000 audit[2349]: NETFILTER_CFG table=mangle:37 family=2 entries=1 op=nft_register_chain pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:51.701000 audit[2349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff71985f0 a2=0 a3=1 items=0 ppid=2324 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.701000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Sep 13 01:34:51.702000 audit[2350]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_chain pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:51.702000 audit[2350]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdaec5d30 a2=0 a3=1 items=0 ppid=2324 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Sep 13 01:34:51.704565 kubelet[2324]: I0913 01:34:51.701334 2324 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 01:34:51.704565 kubelet[2324]: I0913 01:34:51.701355 2324 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 01:34:51.704565 kubelet[2324]: I0913 01:34:51.701373 2324 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 01:34:51.704565 kubelet[2324]: E0913 01:34:51.701412 2324 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 01:34:51.707259 kubelet[2324]: I0913 01:34:51.707234 2324 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 01:34:51.707000 audit[2324]: AVC avc: denied { mac_admin } for pid=2324 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Sep 13 01:34:51.707000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Sep 13 01:34:51.707000 audit[2324]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40008c7080 a1=400003be00 a2=40008c7050 a3=25 items=0 ppid=1 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.707000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Sep 13 01:34:51.707517 kubelet[2324]: I0913 01:34:51.707422 2324 server.go:88] "Unprivileged
containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument"
Sep 13 01:34:51.707549 kubelet[2324]: I0913 01:34:51.707535 2324 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 01:34:51.707573 kubelet[2324]: I0913 01:34:51.707545 2324 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 01:34:51.707000 audit[2351]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 13 01:34:51.707000 audit[2351]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffef065ff0 a2=0 a3=1 items=0 ppid=2324 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Sep 13 01:34:51.708000 audit[2352]: NETFILTER_CFG table=mangle:40 family=10 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 01:34:51.708000 audit[2352]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd2fa9770 a2=0 a3=1 items=0 ppid=2324 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.708000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Sep 13 01:34:51.709681 kubelet[2324]: I0913 01:34:51.709650 2324 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 01:34:51.709000 audit[2353]: NETFILTER_CFG table=nat:41 family=10 entries=2 op=nft_register_chain pid=2353 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 01:34:51.709000 audit[2353]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=fffff22e9a80 a2=0 a3=1 items=0 ppid=2324 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.709000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Sep 13 01:34:51.710000 audit[2354]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=2354 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 13 01:34:51.710000 audit[2354]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffedbb2ba0 a2=0 a3=1 items=0 ppid=2324 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:34:51.710000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Sep 13 01:34:51.711284 kubelet[2324]: E0913 01:34:51.711107 2324 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-9d226ffbbf\" not found"
Sep 13 01:34:51.713533 kubelet[2324]: W0913 01:34:51.713488 2324 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
Sep 13 01:34:51.713624 kubelet[2324]: E0913 01:34:51.713533 2324 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:51.808686 kubelet[2324]: I0913 01:34:51.808662 2324 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-9d226ffbbf"
Sep 13 01:34:51.809418 kubelet[2324]: E0913 01:34:51.809394 2324 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-3510.3.8-n-9d226ffbbf"
Sep 13 01:34:51.822259 kubelet[2324]: E0913 01:34:51.822220 2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-9d226ffbbf?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="400ms"
Sep 13 01:34:51.848785 kubelet[2324]: I0913 01:34:51.848746 2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b2b0e8ba7c8ad2f53edabaa644d601b1-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-9d226ffbbf\" (UID: \"b2b0e8ba7c8ad2f53edabaa644d601b1\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-9d226ffbbf"
Sep 13 01:34:51.848906 kubelet[2324]: I0913 01:34:51.848835 2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2b0e8ba7c8ad2f53edabaa644d601b1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-9d226ffbbf\" (UID: \"b2b0e8ba7c8ad2f53edabaa644d601b1\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-9d226ffbbf"
Sep 13 01:34:51.848906 kubelet[2324]: I0913 01:34:51.848857 2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf87a2cae414fdb6a1f6926f94fa2fe9-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-9d226ffbbf\" (UID: \"cf87a2cae414fdb6a1f6926f94fa2fe9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-9d226ffbbf"
Sep 13 01:34:51.848906 kubelet[2324]: I0913 01:34:51.848874 2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cf87a2cae414fdb6a1f6926f94fa2fe9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-9d226ffbbf\" (UID: \"cf87a2cae414fdb6a1f6926f94fa2fe9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-9d226ffbbf"
Sep 13 01:34:51.849000 kubelet[2324]: I0913 01:34:51.848919 2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf87a2cae414fdb6a1f6926f94fa2fe9-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-9d226ffbbf\" (UID: \"cf87a2cae414fdb6a1f6926f94fa2fe9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-9d226ffbbf"
Sep 13 01:34:51.849000 kubelet[2324]: I0913 01:34:51.848936 2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf87a2cae414fdb6a1f6926f94fa2fe9-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-9d226ffbbf\" (UID: \"cf87a2cae414fdb6a1f6926f94fa2fe9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-9d226ffbbf"
Sep 13 01:34:51.849000 kubelet[2324]: I0913 01:34:51.848950 2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf87a2cae414fdb6a1f6926f94fa2fe9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-9d226ffbbf\" (UID: \"cf87a2cae414fdb6a1f6926f94fa2fe9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-9d226ffbbf"
Sep 13 01:34:51.849000 kubelet[2324]: I0913 01:34:51.848965 2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d5a23e06f0f1ccd426aac14d1f98d55-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-9d226ffbbf\" (UID: \"5d5a23e06f0f1ccd426aac14d1f98d55\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-9d226ffbbf"
Sep 13 01:34:51.849089 kubelet[2324]: I0913 01:34:51.849008 2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b2b0e8ba7c8ad2f53edabaa644d601b1-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-9d226ffbbf\" (UID: \"b2b0e8ba7c8ad2f53edabaa644d601b1\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-9d226ffbbf"
Sep 13 01:34:52.012706 kubelet[2324]: I0913 01:34:52.011642 2324 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-9d226ffbbf"
Sep 13 01:34:52.012706 kubelet[2324]: E0913 01:34:52.012489 2324 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-3510.3.8-n-9d226ffbbf"
Sep 13 01:34:52.110997 env[1589]: time="2025-09-13T01:34:52.110747421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-9d226ffbbf,Uid:b2b0e8ba7c8ad2f53edabaa644d601b1,Namespace:kube-system,Attempt:0,}"
Sep 13 01:34:52.111919 env[1589]: time="2025-09-13T01:34:52.111893418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-9d226ffbbf,Uid:cf87a2cae414fdb6a1f6926f94fa2fe9,Namespace:kube-system,Attempt:0,}"
Sep 13 01:34:52.112453 env[1589]: time="2025-09-13T01:34:52.112421777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-9d226ffbbf,Uid:5d5a23e06f0f1ccd426aac14d1f98d55,Namespace:kube-system,Attempt:0,}"
Sep 13 01:34:52.223455 kubelet[2324]: E0913 01:34:52.223410 2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-9d226ffbbf?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="800ms"
Sep 13 01:34:52.414551 kubelet[2324]: W0913 01:34:52.414199 2324 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused
Sep 13 01:34:52.414551 kubelet[2324]: E0913 01:34:52.414263 2324 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:52.415479 kubelet[2324]: I0913 01:34:52.415254 2324 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-9d226ffbbf"
Sep 13 01:34:52.415576 kubelet[2324]: E0913 01:34:52.415545 2324 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-3510.3.8-n-9d226ffbbf"
Sep 13 01:34:52.742292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762184802.mount: Deactivated successfully.
Sep 13 01:34:52.764600 env[1589]: time="2025-09-13T01:34:52.764559477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:52.773064 kubelet[2324]: W0913 01:34:52.772944 2324 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Sep 13 01:34:52.773064 kubelet[2324]: E0913 01:34:52.773023 2324 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:52.787457 env[1589]: time="2025-09-13T01:34:52.787393257Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:52.793968 env[1589]: time="2025-09-13T01:34:52.793926280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:52.799719 env[1589]: time="2025-09-13T01:34:52.799686465Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:52.807487 env[1589]: time="2025-09-13T01:34:52.807452725Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:52.814543 env[1589]: 
time="2025-09-13T01:34:52.814508467Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:52.817676 env[1589]: time="2025-09-13T01:34:52.817637538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:52.820786 env[1589]: time="2025-09-13T01:34:52.820757490Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:52.824066 env[1589]: time="2025-09-13T01:34:52.824027802Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:52.826875 env[1589]: time="2025-09-13T01:34:52.826848274Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:52.838495 env[1589]: time="2025-09-13T01:34:52.838455204Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:52.841679 env[1589]: time="2025-09-13T01:34:52.841630756Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:52.895625 env[1589]: time="2025-09-13T01:34:52.895532935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:34:52.895625 env[1589]: time="2025-09-13T01:34:52.895576775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:34:52.895625 env[1589]: time="2025-09-13T01:34:52.895587815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:34:52.896162 env[1589]: time="2025-09-13T01:34:52.896086094Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1ee3711c1eeecb6a79f9bc7a33a30a7ea31f6a60a79cae78592f26025231bab pid=2363 runtime=io.containerd.runc.v2 Sep 13 01:34:52.935368 env[1589]: time="2025-09-13T01:34:52.934762593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:34:52.935368 env[1589]: time="2025-09-13T01:34:52.934801353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:34:52.935368 env[1589]: time="2025-09-13T01:34:52.934811473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:34:52.935368 env[1589]: time="2025-09-13T01:34:52.935049472Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/07f05b3d7502bfa944281af8acab8fcd6108ba4400e2f48cbea4b6ffb2337e0a pid=2402 runtime=io.containerd.runc.v2 Sep 13 01:34:52.951225 env[1589]: time="2025-09-13T01:34:52.951132070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:34:52.951418 env[1589]: time="2025-09-13T01:34:52.951176950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:34:52.951418 env[1589]: time="2025-09-13T01:34:52.951223830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:34:52.951508 env[1589]: time="2025-09-13T01:34:52.951443710Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2650ebe9c839df059cfb63d0d509fa3309567c09c171b6e9f8a7aff8a7dfdb22 pid=2416 runtime=io.containerd.runc.v2 Sep 13 01:34:52.960054 env[1589]: time="2025-09-13T01:34:52.959982647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-9d226ffbbf,Uid:5d5a23e06f0f1ccd426aac14d1f98d55,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1ee3711c1eeecb6a79f9bc7a33a30a7ea31f6a60a79cae78592f26025231bab\"" Sep 13 01:34:52.967679 env[1589]: time="2025-09-13T01:34:52.967637747Z" level=info msg="CreateContainer within sandbox \"a1ee3711c1eeecb6a79f9bc7a33a30a7ea31f6a60a79cae78592f26025231bab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 01:34:52.972600 kubelet[2324]: W0913 01:34:52.972517 2324 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Sep 13 01:34:52.972600 kubelet[2324]: E0913 01:34:52.972561 2324 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:53.000146 env[1589]: time="2025-09-13T01:34:52.999400625Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-9d226ffbbf,Uid:b2b0e8ba7c8ad2f53edabaa644d601b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"07f05b3d7502bfa944281af8acab8fcd6108ba4400e2f48cbea4b6ffb2337e0a\"" Sep 13 01:34:53.004616 env[1589]: time="2025-09-13T01:34:53.004578491Z" level=info msg="CreateContainer within sandbox \"07f05b3d7502bfa944281af8acab8fcd6108ba4400e2f48cbea4b6ffb2337e0a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 01:34:53.011811 env[1589]: time="2025-09-13T01:34:53.011758313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-9d226ffbbf,Uid:cf87a2cae414fdb6a1f6926f94fa2fe9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2650ebe9c839df059cfb63d0d509fa3309567c09c171b6e9f8a7aff8a7dfdb22\"" Sep 13 01:34:53.016144 env[1589]: time="2025-09-13T01:34:53.016110902Z" level=info msg="CreateContainer within sandbox \"2650ebe9c839df059cfb63d0d509fa3309567c09c171b6e9f8a7aff8a7dfdb22\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 01:34:53.016537 env[1589]: time="2025-09-13T01:34:53.016501301Z" level=info msg="CreateContainer within sandbox \"a1ee3711c1eeecb6a79f9bc7a33a30a7ea31f6a60a79cae78592f26025231bab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"42825a81c983c045b38fa5a36e6364a1a3ae1175a61b578348872da1757b7d8b\"" Sep 13 01:34:53.017166 env[1589]: time="2025-09-13T01:34:53.017135859Z" level=info msg="StartContainer for \"42825a81c983c045b38fa5a36e6364a1a3ae1175a61b578348872da1757b7d8b\"" Sep 13 01:34:53.024493 kubelet[2324]: E0913 01:34:53.024428 2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-9d226ffbbf?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="1.6s" Sep 13 01:34:53.051283 kubelet[2324]: W0913 01:34:53.051145 2324 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-9d226ffbbf&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused Sep 13 01:34:53.051283 kubelet[2324]: E0913 01:34:53.051243 2324 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-9d226ffbbf&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:53.078381 env[1589]: time="2025-09-13T01:34:53.078335224Z" level=info msg="StartContainer for \"42825a81c983c045b38fa5a36e6364a1a3ae1175a61b578348872da1757b7d8b\" returns successfully" Sep 13 01:34:53.084438 env[1589]: time="2025-09-13T01:34:53.084391569Z" level=info msg="CreateContainer within sandbox \"2650ebe9c839df059cfb63d0d509fa3309567c09c171b6e9f8a7aff8a7dfdb22\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3bbe6be6ef2cdcfce045edfcc1c560983767927088f690b653b5d4efc67d82b0\"" Sep 13 01:34:53.085056 env[1589]: time="2025-09-13T01:34:53.085031727Z" level=info msg="StartContainer for \"3bbe6be6ef2cdcfce045edfcc1c560983767927088f690b653b5d4efc67d82b0\"" Sep 13 01:34:53.088806 env[1589]: time="2025-09-13T01:34:53.088771558Z" level=info msg="CreateContainer within sandbox \"07f05b3d7502bfa944281af8acab8fcd6108ba4400e2f48cbea4b6ffb2337e0a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ced77740e8c682eb5c4400cab7710718c7a082a36ede9cfb297e028a92881669\"" Sep 13 01:34:53.089426 env[1589]: time="2025-09-13T01:34:53.089402036Z" level=info msg="StartContainer for \"ced77740e8c682eb5c4400cab7710718c7a082a36ede9cfb297e028a92881669\"" Sep 13 01:34:53.205209 env[1589]: time="2025-09-13T01:34:53.204389104Z" level=info 
msg="StartContainer for \"ced77740e8c682eb5c4400cab7710718c7a082a36ede9cfb297e028a92881669\" returns successfully" Sep 13 01:34:53.218313 kubelet[2324]: I0913 01:34:53.217902 2324 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:53.218313 kubelet[2324]: E0913 01:34:53.218271 2324 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:53.219625 env[1589]: time="2025-09-13T01:34:53.219591065Z" level=info msg="StartContainer for \"3bbe6be6ef2cdcfce045edfcc1c560983767927088f690b653b5d4efc67d82b0\" returns successfully" Sep 13 01:34:54.820770 kubelet[2324]: I0913 01:34:54.820709 2324 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:55.802600 kubelet[2324]: E0913 01:34:55.802554 2324 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-9d226ffbbf\" not found" node="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:55.853319 kubelet[2324]: I0913 01:34:55.853289 2324 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:55.853705 kubelet[2324]: E0913 01:34:55.853691 2324 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-9d226ffbbf\": node \"ci-3510.3.8-n-9d226ffbbf\" not found" Sep 13 01:34:56.514577 kubelet[2324]: I0913 01:34:56.514547 2324 apiserver.go:52] "Watching apiserver" Sep 13 01:34:56.546338 kubelet[2324]: I0913 01:34:56.546304 2324 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 01:34:57.971466 systemd[1]: Reloading. 
Sep 13 01:34:58.024550 /usr/lib/systemd/system-generators/torcx-generator[2613]: time="2025-09-13T01:34:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 01:34:58.024583 /usr/lib/systemd/system-generators/torcx-generator[2613]: time="2025-09-13T01:34:58Z" level=info msg="torcx already run" Sep 13 01:34:58.109718 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:34:58.109737 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:34:58.126891 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:34:58.214638 systemd[1]: Stopping kubelet.service... Sep 13 01:34:58.238021 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 01:34:58.238349 systemd[1]: Stopped kubelet.service. Sep 13 01:34:58.261836 kernel: kauditd_printk_skb: 43 callbacks suppressed Sep 13 01:34:58.261916 kernel: audit: type=1131 audit(1757727298.238:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:34:58.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:34:58.240720 systemd[1]: Starting kubelet.service... 
Sep 13 01:34:58.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:34:58.484630 systemd[1]: Started kubelet.service. Sep 13 01:34:58.505241 kernel: audit: type=1130 audit(1757727298.484:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:34:58.561042 kubelet[2688]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:34:58.561042 kubelet[2688]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 01:34:58.561042 kubelet[2688]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 01:34:58.561439 kubelet[2688]: I0913 01:34:58.561095 2688 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 01:34:58.570639 kubelet[2688]: I0913 01:34:58.570598 2688 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 01:34:58.570639 kubelet[2688]: I0913 01:34:58.570628 2688 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 01:34:58.570890 kubelet[2688]: I0913 01:34:58.570869 2688 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 01:34:58.572809 kubelet[2688]: I0913 01:34:58.572573 2688 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 01:34:58.574704 kubelet[2688]: I0913 01:34:58.574679 2688 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 01:34:58.578817 kubelet[2688]: E0913 01:34:58.578779 2688 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 01:34:58.578817 kubelet[2688]: I0913 01:34:58.578814 2688 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 01:34:58.583021 kubelet[2688]: I0913 01:34:58.582986 2688 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 01:34:58.583629 kubelet[2688]: I0913 01:34:58.583612 2688 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 01:34:58.583829 kubelet[2688]: I0913 01:34:58.583795 2688 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 01:34:58.584473 kubelet[2688]: I0913 01:34:58.583903 2688 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-9d226ffbbf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 01:34:58.584582 kubelet[2688]: I0913 01:34:58.584481 2688 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 01:34:58.584582 kubelet[2688]: I0913 01:34:58.584493 2688 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 01:34:58.584582 kubelet[2688]: I0913 01:34:58.584536 2688 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:34:58.584666 kubelet[2688]: I0913 01:34:58.584625 2688 kubelet.go:408] "Attempting to sync node with API server" Sep 13 01:34:58.584666 kubelet[2688]: I0913 01:34:58.584637 2688 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 01:34:58.584666 kubelet[2688]: I0913 01:34:58.584656 2688 kubelet.go:314] "Adding apiserver pod source" Sep 13 01:34:58.584736 kubelet[2688]: I0913 01:34:58.584669 2688 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 01:34:58.589000 audit[2688]: AVC avc: denied { mac_admin } for pid=2688 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:34:58.608152 kubelet[2688]: I0913 01:34:58.586076 2688 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 01:34:58.608152 kubelet[2688]: I0913 01:34:58.586681 2688 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 01:34:58.608152 kubelet[2688]: I0913 01:34:58.587262 2688 server.go:1274] "Started kubelet" Sep 13 01:34:58.608152 kubelet[2688]: I0913 01:34:58.589656 2688 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 01:34:58.608152 kubelet[2688]: I0913 01:34:58.589708 2688 kubelet.go:1434] 
"Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 01:34:58.608152 kubelet[2688]: I0913 01:34:58.589736 2688 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 01:34:58.608152 kubelet[2688]: I0913 01:34:58.594670 2688 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 01:34:58.608152 kubelet[2688]: I0913 01:34:58.595565 2688 server.go:449] "Adding debug handlers to kubelet server" Sep 13 01:34:58.608152 kubelet[2688]: I0913 01:34:58.596477 2688 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 01:34:58.608152 kubelet[2688]: I0913 01:34:58.596673 2688 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 01:34:58.608152 kubelet[2688]: I0913 01:34:58.597196 2688 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 01:34:58.608152 kubelet[2688]: I0913 01:34:58.598368 2688 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 01:34:58.608152 kubelet[2688]: E0913 01:34:58.598563 2688 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-9d226ffbbf\" not found" Sep 13 01:34:58.608152 kubelet[2688]: I0913 01:34:58.600356 2688 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 01:34:58.608152 kubelet[2688]: I0913 01:34:58.600472 2688 reconciler.go:26] "Reconciler: start to sync state" Sep 13 01:34:58.608530 kubelet[2688]: I0913 01:34:58.602080 2688 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 01:34:58.608530 kubelet[2688]: I0913 01:34:58.602992 2688 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 01:34:58.608530 kubelet[2688]: I0913 01:34:58.603009 2688 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 01:34:58.608530 kubelet[2688]: I0913 01:34:58.603025 2688 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 01:34:58.608530 kubelet[2688]: E0913 01:34:58.603063 2688 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 01:34:58.630569 kubelet[2688]: I0913 01:34:58.610662 2688 factory.go:221] Registration of the systemd container factory successfully Sep 13 01:34:58.630569 kubelet[2688]: I0913 01:34:58.610795 2688 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 01:34:58.589000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 01:34:58.644446 kernel: audit: type=1400 audit(1757727298.589:238): avc: denied { mac_admin } for pid=2688 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:34:58.644973 kernel: audit: type=1401 audit(1757727298.589:238): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 01:34:58.589000 audit[2688]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40006fa240 a1=4000a8b188 a2=40002eff80 a3=25 items=0 ppid=1 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:34:58.663423 kubelet[2688]: I0913 01:34:58.663400 2688 factory.go:221] Registration of the containerd container factory successfully Sep 13 01:34:58.673592 kernel: audit: type=1300 audit(1757727298.589:238): arch=c00000b7 syscall=5 success=no exit=-22 
a0=40006fa240 a1=4000a8b188 a2=40002eff80 a3=25 items=0 ppid=1 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:34:58.674321 kernel: audit: type=1327 audit(1757727298.589:238): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 01:34:58.589000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 01:34:58.589000 audit[2688]: AVC avc: denied { mac_admin } for pid=2688 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:34:58.718039 kernel: audit: type=1400 audit(1757727298.589:239): avc: denied { mac_admin } for pid=2688 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:34:58.589000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 01:34:58.728054 kernel: audit: type=1401 audit(1757727298.589:239): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 01:34:58.728266 kubelet[2688]: E0913 01:34:58.728236 2688 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 01:34:58.589000 audit[2688]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a8f620 a1=4000a8b1a0 a2=40006fa2d0 a3=25 items=0 ppid=1 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:34:58.757312 kernel: audit: type=1300 audit(1757727298.589:239): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a8f620 a1=4000a8b1a0 a2=40006fa2d0 a3=25 items=0 ppid=1 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:34:58.589000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 01:34:58.783894 kernel: audit: type=1327 audit(1757727298.589:239): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 01:34:58.814650 kubelet[2688]: I0913 01:34:58.814628 2688 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 01:34:58.814810 kubelet[2688]: I0913 01:34:58.814797 2688 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 01:34:58.814870 kubelet[2688]: I0913 01:34:58.814862 2688 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:34:58.815070 kubelet[2688]: I0913 01:34:58.815057 2688 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 01:34:58.815161 kubelet[2688]: I0913 01:34:58.815135 2688 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 01:34:58.815231 kubelet[2688]: I0913 01:34:58.815223 2688 policy_none.go:49] "None policy: Start" Sep 13 01:34:58.815994 kubelet[2688]: I0913 01:34:58.815978 2688 memory_manager.go:170] "Starting memorymanager" 
policy="None" Sep 13 01:34:58.816086 kubelet[2688]: I0913 01:34:58.816077 2688 state_mem.go:35] "Initializing new in-memory state store" Sep 13 01:34:58.816278 kubelet[2688]: I0913 01:34:58.816267 2688 state_mem.go:75] "Updated machine memory state" Sep 13 01:34:58.817379 kubelet[2688]: I0913 01:34:58.817361 2688 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 01:34:58.817000 audit[2688]: AVC avc: denied { mac_admin } for pid=2688 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:34:58.817000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 01:34:58.817000 audit[2688]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400113ef60 a1=4001140270 a2=400113ef30 a3=25 items=0 ppid=1 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:34:58.817000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 01:34:58.817738 kubelet[2688]: I0913 01:34:58.817721 2688 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 01:34:58.817931 kubelet[2688]: I0913 01:34:58.817920 2688 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 01:34:58.818030 kubelet[2688]: I0913 01:34:58.817994 2688 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 01:34:58.820792 kubelet[2688]: I0913 01:34:58.820766 2688 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 01:34:58.926891 kubelet[2688]: I0913 01:34:58.926851 2688 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:58.946377 kubelet[2688]: I0913 01:34:58.946342 2688 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:58.946684 kubelet[2688]: I0913 01:34:58.946671 2688 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:58.950220 kubelet[2688]: W0913 01:34:58.950198 2688 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:34:58.950626 kubelet[2688]: W0913 01:34:58.950613 2688 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:34:58.951027 kubelet[2688]: W0913 01:34:58.951011 2688 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:34:59.031005 kubelet[2688]: I0913 01:34:59.030891 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d5a23e06f0f1ccd426aac14d1f98d55-kubeconfig\") pod 
\"kube-scheduler-ci-3510.3.8-n-9d226ffbbf\" (UID: \"5d5a23e06f0f1ccd426aac14d1f98d55\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:59.031005 kubelet[2688]: I0913 01:34:59.030932 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2b0e8ba7c8ad2f53edabaa644d601b1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-9d226ffbbf\" (UID: \"b2b0e8ba7c8ad2f53edabaa644d601b1\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:59.031005 kubelet[2688]: I0913 01:34:59.030953 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf87a2cae414fdb6a1f6926f94fa2fe9-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-9d226ffbbf\" (UID: \"cf87a2cae414fdb6a1f6926f94fa2fe9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:59.031005 kubelet[2688]: I0913 01:34:59.030971 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cf87a2cae414fdb6a1f6926f94fa2fe9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-9d226ffbbf\" (UID: \"cf87a2cae414fdb6a1f6926f94fa2fe9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:59.031005 kubelet[2688]: I0913 01:34:59.030999 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b2b0e8ba7c8ad2f53edabaa644d601b1-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-9d226ffbbf\" (UID: \"b2b0e8ba7c8ad2f53edabaa644d601b1\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:59.031296 kubelet[2688]: I0913 01:34:59.031018 2688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b2b0e8ba7c8ad2f53edabaa644d601b1-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-9d226ffbbf\" (UID: \"b2b0e8ba7c8ad2f53edabaa644d601b1\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:59.031296 kubelet[2688]: I0913 01:34:59.031032 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf87a2cae414fdb6a1f6926f94fa2fe9-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-9d226ffbbf\" (UID: \"cf87a2cae414fdb6a1f6926f94fa2fe9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:59.031296 kubelet[2688]: I0913 01:34:59.031048 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf87a2cae414fdb6a1f6926f94fa2fe9-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-9d226ffbbf\" (UID: \"cf87a2cae414fdb6a1f6926f94fa2fe9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:59.031296 kubelet[2688]: I0913 01:34:59.031067 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf87a2cae414fdb6a1f6926f94fa2fe9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-9d226ffbbf\" (UID: \"cf87a2cae414fdb6a1f6926f94fa2fe9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-9d226ffbbf" Sep 13 01:34:59.609434 kubelet[2688]: I0913 01:34:59.609147 2688 apiserver.go:52] "Watching apiserver" Sep 13 01:34:59.700924 kubelet[2688]: I0913 01:34:59.700883 2688 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 01:34:59.813837 kubelet[2688]: I0913 01:34:59.813778 2688 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-9d226ffbbf" podStartSLOduration=1.813763803 podStartE2EDuration="1.813763803s" podCreationTimestamp="2025-09-13 01:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:34:59.813030244 +0000 UTC m=+1.313868476" watchObservedRunningTime="2025-09-13 01:34:59.813763803 +0000 UTC m=+1.314602035" Sep 13 01:34:59.840624 kubelet[2688]: I0913 01:34:59.840567 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-9d226ffbbf" podStartSLOduration=1.840552304 podStartE2EDuration="1.840552304s" podCreationTimestamp="2025-09-13 01:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:34:59.825590937 +0000 UTC m=+1.326429169" watchObservedRunningTime="2025-09-13 01:34:59.840552304 +0000 UTC m=+1.341390536" Sep 13 01:34:59.857392 kubelet[2688]: I0913 01:34:59.857338 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-9d226ffbbf" podStartSLOduration=1.8573187880000002 podStartE2EDuration="1.857318788s" podCreationTimestamp="2025-09-13 01:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:34:59.841253823 +0000 UTC m=+1.342092055" watchObservedRunningTime="2025-09-13 01:34:59.857318788 +0000 UTC m=+1.358156980" Sep 13 01:35:03.983580 kubelet[2688]: I0913 01:35:03.983552 2688 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 01:35:03.984310 env[1589]: time="2025-09-13T01:35:03.984270216Z" level=info msg="No cni config template is specified, wait for other system components 
to drop the config." Sep 13 01:35:03.984708 kubelet[2688]: I0913 01:35:03.984691 2688 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 01:35:05.061861 kubelet[2688]: I0913 01:35:05.061801 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/81c665f7-b389-436a-9587-a243ba705d8b-var-lib-calico\") pod \"tigera-operator-58fc44c59b-95t2n\" (UID: \"81c665f7-b389-436a-9587-a243ba705d8b\") " pod="tigera-operator/tigera-operator-58fc44c59b-95t2n" Sep 13 01:35:05.061861 kubelet[2688]: I0913 01:35:05.061868 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv9sx\" (UniqueName: \"kubernetes.io/projected/81c665f7-b389-436a-9587-a243ba705d8b-kube-api-access-kv9sx\") pod \"tigera-operator-58fc44c59b-95t2n\" (UID: \"81c665f7-b389-436a-9587-a243ba705d8b\") " pod="tigera-operator/tigera-operator-58fc44c59b-95t2n" Sep 13 01:35:05.162536 kubelet[2688]: I0913 01:35:05.162495 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/552f1233-c537-4d9e-8409-c108b0d4885f-lib-modules\") pod \"kube-proxy-2jxgx\" (UID: \"552f1233-c537-4d9e-8409-c108b0d4885f\") " pod="kube-system/kube-proxy-2jxgx" Sep 13 01:35:05.162751 kubelet[2688]: I0913 01:35:05.162734 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/552f1233-c537-4d9e-8409-c108b0d4885f-xtables-lock\") pod \"kube-proxy-2jxgx\" (UID: \"552f1233-c537-4d9e-8409-c108b0d4885f\") " pod="kube-system/kube-proxy-2jxgx" Sep 13 01:35:05.162838 kubelet[2688]: I0913 01:35:05.162823 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/552f1233-c537-4d9e-8409-c108b0d4885f-kube-proxy\") pod \"kube-proxy-2jxgx\" (UID: \"552f1233-c537-4d9e-8409-c108b0d4885f\") " pod="kube-system/kube-proxy-2jxgx" Sep 13 01:35:05.162908 kubelet[2688]: I0913 01:35:05.162895 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pf8x\" (UniqueName: \"kubernetes.io/projected/552f1233-c537-4d9e-8409-c108b0d4885f-kube-api-access-9pf8x\") pod \"kube-proxy-2jxgx\" (UID: \"552f1233-c537-4d9e-8409-c108b0d4885f\") " pod="kube-system/kube-proxy-2jxgx" Sep 13 01:35:05.170649 kubelet[2688]: I0913 01:35:05.170614 2688 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 01:35:05.235404 env[1589]: time="2025-09-13T01:35:05.235357984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-95t2n,Uid:81c665f7-b389-436a-9587-a243ba705d8b,Namespace:tigera-operator,Attempt:0,}" Sep 13 01:35:05.281137 env[1589]: time="2025-09-13T01:35:05.281061779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:05.284158 env[1589]: time="2025-09-13T01:35:05.284123373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:05.284314 env[1589]: time="2025-09-13T01:35:05.284290773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:05.284547 env[1589]: time="2025-09-13T01:35:05.284518412Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c3cadd33054acc2e06548e44218d3b0025502fae5730694df24e68541be5a4b pid=2741 runtime=io.containerd.runc.v2 Sep 13 01:35:05.334225 env[1589]: time="2025-09-13T01:35:05.333799080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-95t2n,Uid:81c665f7-b389-436a-9587-a243ba705d8b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2c3cadd33054acc2e06548e44218d3b0025502fae5730694df24e68541be5a4b\"" Sep 13 01:35:05.337453 env[1589]: time="2025-09-13T01:35:05.336233595Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 01:35:05.352425 env[1589]: time="2025-09-13T01:35:05.352387125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2jxgx,Uid:552f1233-c537-4d9e-8409-c108b0d4885f,Namespace:kube-system,Attempt:0,}" Sep 13 01:35:05.392938 env[1589]: time="2025-09-13T01:35:05.392862169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:05.393081 env[1589]: time="2025-09-13T01:35:05.392943849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:05.393081 env[1589]: time="2025-09-13T01:35:05.392969889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:05.393260 env[1589]: time="2025-09-13T01:35:05.393221289Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8cab50193630dc5a9e626f071d12257fa1319efdc756777239834861d4da975 pid=2780 runtime=io.containerd.runc.v2 Sep 13 01:35:05.427823 env[1589]: time="2025-09-13T01:35:05.427775704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2jxgx,Uid:552f1233-c537-4d9e-8409-c108b0d4885f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8cab50193630dc5a9e626f071d12257fa1319efdc756777239834861d4da975\"" Sep 13 01:35:05.432383 env[1589]: time="2025-09-13T01:35:05.432344935Z" level=info msg="CreateContainer within sandbox \"c8cab50193630dc5a9e626f071d12257fa1319efdc756777239834861d4da975\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 01:35:05.472277 env[1589]: time="2025-09-13T01:35:05.472226461Z" level=info msg="CreateContainer within sandbox \"c8cab50193630dc5a9e626f071d12257fa1319efdc756777239834861d4da975\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4a01b7d9cf2f65d389e50560014736c17e200092de9ba56743dfd05ceec6e7e3\"" Sep 13 01:35:05.474289 env[1589]: time="2025-09-13T01:35:05.473099259Z" level=info msg="StartContainer for \"4a01b7d9cf2f65d389e50560014736c17e200092de9ba56743dfd05ceec6e7e3\"" Sep 13 01:35:05.524311 env[1589]: time="2025-09-13T01:35:05.524269563Z" level=info msg="StartContainer for \"4a01b7d9cf2f65d389e50560014736c17e200092de9ba56743dfd05ceec6e7e3\" returns successfully" Sep 13 01:35:05.661000 audit[2881]: NETFILTER_CFG table=mangle:43 family=2 entries=1 op=nft_register_chain pid=2881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.667853 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 13 01:35:05.667975 kernel: audit: type=1325 audit(1757727305.661:241): table=mangle:43 family=2 entries=1 op=nft_register_chain pid=2881 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.661000 audit[2881]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffd74f9c0 a2=0 a3=1 items=0 ppid=2833 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.706139 kernel: audit: type=1300 audit(1757727305.661:241): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffd74f9c0 a2=0 a3=1 items=0 ppid=2833 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.706280 kernel: audit: type=1327 audit(1757727305.661:241): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 01:35:05.661000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 01:35:05.663000 audit[2882]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_chain pid=2882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.732197 kernel: audit: type=1325 audit(1757727305.663:242): table=nat:44 family=2 entries=1 op=nft_register_chain pid=2882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.663000 audit[2882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0353c10 a2=0 a3=1 items=0 ppid=2833 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.758569 kernel: audit: type=1300 audit(1757727305.663:242): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0353c10 a2=0 a3=1 items=0 ppid=2833 pid=2882 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.758649 kernel: audit: type=1327 audit(1757727305.663:242): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 01:35:05.663000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 01:35:05.664000 audit[2883]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=2883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.784956 kernel: audit: type=1325 audit(1757727305.664:243): table=filter:45 family=2 entries=1 op=nft_register_chain pid=2883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.664000 audit[2883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe3c129c0 a2=0 a3=1 items=0 ppid=2833 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.810756 kernel: audit: type=1300 audit(1757727305.664:243): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe3c129c0 a2=0 a3=1 items=0 ppid=2833 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.664000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 01:35:05.825283 kernel: audit: type=1327 audit(1757727305.664:243): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 01:35:05.665000 audit[2884]: 
NETFILTER_CFG table=mangle:46 family=10 entries=1 op=nft_register_chain pid=2884 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:05.839939 kernel: audit: type=1325 audit(1757727305.665:244): table=mangle:46 family=10 entries=1 op=nft_register_chain pid=2884 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:05.665000 audit[2884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1750a90 a2=0 a3=1 items=0 ppid=2833 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.665000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 01:35:05.666000 audit[2885]: NETFILTER_CFG table=nat:47 family=10 entries=1 op=nft_register_chain pid=2885 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:05.666000 audit[2885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe0ef7820 a2=0 a3=1 items=0 ppid=2833 pid=2885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.666000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 01:35:05.667000 audit[2886]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_chain pid=2886 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:05.667000 audit[2886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe1564f20 a2=0 a3=1 items=0 ppid=2833 pid=2886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 01:35:05.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 01:35:05.770000 audit[2887]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.770000 audit[2887]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff4cddd70 a2=0 a3=1 items=0 ppid=2833 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.770000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 01:35:05.786000 audit[2889]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2889 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.786000 audit[2889]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffed77c490 a2=0 a3=1 items=0 ppid=2833 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.786000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 13 01:35:05.849000 audit[2892]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.849000 audit[2892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff7e26cf0 a2=0 a3=1 items=0 ppid=2833 pid=2892 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.849000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 13 01:35:05.851000 audit[2893]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2893 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.851000 audit[2893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc84ccb70 a2=0 a3=1 items=0 ppid=2833 pid=2893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.851000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 01:35:05.853000 audit[2895]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.853000 audit[2895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd9e97570 a2=0 a3=1 items=0 ppid=2833 pid=2895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.853000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 01:35:05.854000 audit[2896]: NETFILTER_CFG 
table=filter:54 family=2 entries=1 op=nft_register_chain pid=2896 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.854000 audit[2896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffadbfed0 a2=0 a3=1 items=0 ppid=2833 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.854000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 01:35:05.860000 audit[2898]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2898 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.860000 audit[2898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc8810690 a2=0 a3=1 items=0 ppid=2833 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.860000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 01:35:05.863000 audit[2901]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.863000 audit[2901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc16ff220 a2=0 a3=1 items=0 ppid=2833 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.863000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 13 01:35:05.865000 audit[2902]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2902 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.865000 audit[2902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffce146e0 a2=0 a3=1 items=0 ppid=2833 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.865000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 01:35:05.867000 audit[2904]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=2904 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.867000 audit[2904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdd647ba0 a2=0 a3=1 items=0 ppid=2833 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.867000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 01:35:05.869000 audit[2905]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=2905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.869000 audit[2905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffec307620 a2=0 a3=1 
items=0 ppid=2833 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.869000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 01:35:05.872000 audit[2907]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=2907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.872000 audit[2907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcd031b00 a2=0 a3=1 items=0 ppid=2833 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.872000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 01:35:05.875000 audit[2910]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.875000 audit[2910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe03bfd50 a2=0 a3=1 items=0 ppid=2833 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.875000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 01:35:05.879000 audit[2913]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.879000 audit[2913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd2376290 a2=0 a3=1 items=0 ppid=2833 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.879000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 01:35:05.880000 audit[2914]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_chain pid=2914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.880000 audit[2914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe7f73f10 a2=0 a3=1 items=0 ppid=2833 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 01:35:05.882000 audit[2916]: NETFILTER_CFG table=nat:64 family=2 entries=1 op=nft_register_rule pid=2916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.882000 audit[2916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 
a1=fffff0ae0a40 a2=0 a3=1 items=0 ppid=2833 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.882000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 01:35:05.885000 audit[2919]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_rule pid=2919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.885000 audit[2919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd5ad1c50 a2=0 a3=1 items=0 ppid=2833 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.885000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 01:35:05.886000 audit[2920]: NETFILTER_CFG table=nat:66 family=2 entries=1 op=nft_register_chain pid=2920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.886000 audit[2920]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffd99b00 a2=0 a3=1 items=0 ppid=2833 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 01:35:05.888000 audit[2922]: NETFILTER_CFG 
table=nat:67 family=2 entries=1 op=nft_register_rule pid=2922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 01:35:05.888000 audit[2922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffffed914d0 a2=0 a3=1 items=0 ppid=2833 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:05.888000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 01:35:06.012000 audit[2928]: NETFILTER_CFG table=filter:68 family=2 entries=8 op=nft_register_rule pid=2928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:06.012000 audit[2928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffed7643b0 a2=0 a3=1 items=0 ppid=2833 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.012000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:06.054000 audit[2928]: NETFILTER_CFG table=nat:69 family=2 entries=14 op=nft_register_chain pid=2928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:06.054000 audit[2928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffed7643b0 a2=0 a3=1 items=0 ppid=2833 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.054000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:06.056000 audit[2933]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2933 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.056000 audit[2933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe14fb760 a2=0 a3=1 items=0 ppid=2833 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.056000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 01:35:06.058000 audit[2935]: NETFILTER_CFG table=filter:71 family=10 entries=2 op=nft_register_chain pid=2935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.058000 audit[2935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc14b9110 a2=0 a3=1 items=0 ppid=2833 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 13 01:35:06.061000 audit[2938]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.061000 audit[2938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffcfbfacc0 a2=0 a3=1 items=0 ppid=2833 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.061000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 13 01:35:06.063000 audit[2939]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2939 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.063000 audit[2939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2ae57f0 a2=0 a3=1 items=0 ppid=2833 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.063000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 01:35:06.065000 audit[2941]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.065000 audit[2941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc0ac67f0 a2=0 a3=1 items=0 ppid=2833 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.065000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 01:35:06.066000 audit[2942]: NETFILTER_CFG table=filter:75 family=10 
entries=1 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.066000 audit[2942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4c201b0 a2=0 a3=1 items=0 ppid=2833 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 01:35:06.069000 audit[2944]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.069000 audit[2944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd3b3ec40 a2=0 a3=1 items=0 ppid=2833 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.069000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 13 01:35:06.072000 audit[2947]: NETFILTER_CFG table=filter:77 family=10 entries=2 op=nft_register_chain pid=2947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.072000 audit[2947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffc5996600 a2=0 a3=1 items=0 ppid=2833 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.072000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 01:35:06.074000 audit[2948]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_chain pid=2948 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.074000 audit[2948]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe6481270 a2=0 a3=1 items=0 ppid=2833 pid=2948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.074000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 01:35:06.076000 audit[2950]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule pid=2950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.076000 audit[2950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc3ec71d0 a2=0 a3=1 items=0 ppid=2833 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.076000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 01:35:06.078000 audit[2951]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_chain pid=2951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.078000 audit[2951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffece0a2e0 a2=0 
a3=1 items=0 ppid=2833 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.078000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 01:35:06.080000 audit[2953]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=2953 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.080000 audit[2953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffceaffc80 a2=0 a3=1 items=0 ppid=2833 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.080000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 01:35:06.083000 audit[2956]: NETFILTER_CFG table=filter:82 family=10 entries=1 op=nft_register_rule pid=2956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.083000 audit[2956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc57020e0 a2=0 a3=1 items=0 ppid=2833 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.083000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 01:35:06.087000 audit[2959]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=2959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.087000 audit[2959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff4d19170 a2=0 a3=1 items=0 ppid=2833 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.087000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 13 01:35:06.088000 audit[2960]: NETFILTER_CFG table=nat:84 family=10 entries=1 op=nft_register_chain pid=2960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.088000 audit[2960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff2650920 a2=0 a3=1 items=0 ppid=2833 pid=2960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.088000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 01:35:06.091000 audit[2962]: NETFILTER_CFG table=nat:85 family=10 entries=2 op=nft_register_chain pid=2962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.091000 audit[2962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 
a0=3 a1=ffffddc97ad0 a2=0 a3=1 items=0 ppid=2833 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.091000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 01:35:06.094000 audit[2965]: NETFILTER_CFG table=nat:86 family=10 entries=2 op=nft_register_chain pid=2965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.094000 audit[2965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd1994680 a2=0 a3=1 items=0 ppid=2833 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.094000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 01:35:06.095000 audit[2966]: NETFILTER_CFG table=nat:87 family=10 entries=1 op=nft_register_chain pid=2966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.095000 audit[2966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef314ca0 a2=0 a3=1 items=0 ppid=2833 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.095000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 01:35:06.097000 audit[2968]: 
NETFILTER_CFG table=nat:88 family=10 entries=2 op=nft_register_chain pid=2968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.097000 audit[2968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff33a7810 a2=0 a3=1 items=0 ppid=2833 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.097000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 01:35:06.099000 audit[2969]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.099000 audit[2969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa934120 a2=0 a3=1 items=0 ppid=2833 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.099000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 01:35:06.102000 audit[2971]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=2971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.102000 audit[2971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd282ad40 a2=0 a3=1 items=0 ppid=2833 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.102000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 01:35:06.105000 audit[2974]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_rule pid=2974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 01:35:06.105000 audit[2974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe2af8790 a2=0 a3=1 items=0 ppid=2833 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.105000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 01:35:06.107000 audit[2976]: NETFILTER_CFG table=filter:92 family=10 entries=3 op=nft_register_rule pid=2976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 01:35:06.107000 audit[2976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd0bee2e0 a2=0 a3=1 items=0 ppid=2833 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.107000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:06.108000 audit[2976]: NETFILTER_CFG table=nat:93 family=10 entries=7 op=nft_register_chain pid=2976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 01:35:06.108000 audit[2976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd0bee2e0 a2=0 a3=1 items=0 ppid=2833 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:06.108000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:06.944603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627377086.mount: Deactivated successfully. Sep 13 01:35:07.602374 env[1589]: time="2025-09-13T01:35:07.602331609Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:07.612753 env[1589]: time="2025-09-13T01:35:07.612702470Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:07.618020 env[1589]: time="2025-09-13T01:35:07.617987981Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:07.627657 env[1589]: time="2025-09-13T01:35:07.627614003Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:07.628314 env[1589]: time="2025-09-13T01:35:07.628285082Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 13 01:35:07.631589 env[1589]: time="2025-09-13T01:35:07.631118517Z" level=info msg="CreateContainer within sandbox \"2c3cadd33054acc2e06548e44218d3b0025502fae5730694df24e68541be5a4b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 01:35:07.673264 env[1589]: time="2025-09-13T01:35:07.673158402Z" level=info msg="CreateContainer within sandbox 
\"2c3cadd33054acc2e06548e44218d3b0025502fae5730694df24e68541be5a4b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2131c1344897c4ce69848cda604b748e5a68c400af940c498f0db125369b5fd6\"" Sep 13 01:35:07.674551 env[1589]: time="2025-09-13T01:35:07.674422160Z" level=info msg="StartContainer for \"2131c1344897c4ce69848cda604b748e5a68c400af940c498f0db125369b5fd6\"" Sep 13 01:35:07.725978 env[1589]: time="2025-09-13T01:35:07.725911428Z" level=info msg="StartContainer for \"2131c1344897c4ce69848cda604b748e5a68c400af940c498f0db125369b5fd6\" returns successfully" Sep 13 01:35:07.864427 kubelet[2688]: I0913 01:35:07.864301 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2jxgx" podStartSLOduration=2.8642827410000002 podStartE2EDuration="2.864282741s" podCreationTimestamp="2025-09-13 01:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:35:05.85708922 +0000 UTC m=+7.357927452" watchObservedRunningTime="2025-09-13 01:35:07.864282741 +0000 UTC m=+9.365120973" Sep 13 01:35:07.880811 kubelet[2688]: I0913 01:35:07.880751 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-95t2n" podStartSLOduration=1.5866710689999999 podStartE2EDuration="3.880735632s" podCreationTimestamp="2025-09-13 01:35:04 +0000 UTC" firstStartedPulling="2025-09-13 01:35:05.335598277 +0000 UTC m=+6.836436469" lastFinishedPulling="2025-09-13 01:35:07.6296628 +0000 UTC m=+9.130501032" observedRunningTime="2025-09-13 01:35:07.865302179 +0000 UTC m=+9.366140411" watchObservedRunningTime="2025-09-13 01:35:07.880735632 +0000 UTC m=+9.381573824" Sep 13 01:35:13.718028 sudo[2037]: pam_unix(sudo:session): session closed for user root Sep 13 01:35:13.744218 kernel: kauditd_printk_skb: 143 callbacks suppressed Sep 13 01:35:13.744341 kernel: audit: type=1106 
audit(1757727313.717:292): pid=2037 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 01:35:13.717000 audit[2037]: USER_END pid=2037 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 01:35:13.717000 audit[2037]: CRED_DISP pid=2037 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 01:35:13.766510 kernel: audit: type=1104 audit(1757727313.717:293): pid=2037 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 01:35:13.788865 sshd[1969]: pam_unix(sshd:session): session closed for user core Sep 13 01:35:13.790000 audit[1969]: USER_END pid=1969 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:35:13.818875 systemd-logind[1569]: Session 9 logged out. Waiting for processes to exit. Sep 13 01:35:13.819080 systemd[1]: sshd@6-10.200.20.24:22-10.200.16.10:37314.service: Deactivated successfully. Sep 13 01:35:13.819891 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 01:35:13.821054 systemd-logind[1569]: Removed session 9. 
Sep 13 01:35:13.790000 audit[1969]: CRED_DISP pid=1969 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:35:13.858430 kernel: audit: type=1106 audit(1757727313.790:294): pid=1969 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:35:13.858543 kernel: audit: type=1104 audit(1757727313.790:295): pid=1969 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:35:13.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.24:22-10.200.16.10:37314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:35:13.884314 kernel: audit: type=1131 audit(1757727313.819:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.24:22-10.200.16.10:37314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:35:15.885000 audit[3055]: NETFILTER_CFG table=filter:94 family=2 entries=15 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:15.885000 audit[3055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe073be70 a2=0 a3=1 items=0 ppid=2833 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:15.936030 kernel: audit: type=1325 audit(1757727315.885:297): table=filter:94 family=2 entries=15 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:15.936116 kernel: audit: type=1300 audit(1757727315.885:297): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe073be70 a2=0 a3=1 items=0 ppid=2833 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:15.885000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:15.951458 kernel: audit: type=1327 audit(1757727315.885:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:15.903000 audit[3055]: NETFILTER_CFG table=nat:95 family=2 entries=12 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:15.970991 kernel: audit: type=1325 audit(1757727315.903:298): table=nat:95 family=2 entries=12 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:15.903000 audit[3055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe073be70 a2=0 a3=1 items=0 ppid=2833 pid=3055 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:16.002741 kernel: audit: type=1300 audit(1757727315.903:298): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe073be70 a2=0 a3=1 items=0 ppid=2833 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:15.903000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:15.972000 audit[3057]: NETFILTER_CFG table=filter:96 family=2 entries=16 op=nft_register_rule pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:15.972000 audit[3057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffdb106330 a2=0 a3=1 items=0 ppid=2833 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:15.972000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:16.005000 audit[3057]: NETFILTER_CFG table=nat:97 family=2 entries=12 op=nft_register_rule pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:16.005000 audit[3057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdb106330 a2=0 a3=1 items=0 ppid=2833 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:16.005000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:19.100225 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 01:35:19.100393 kernel: audit: type=1325 audit(1757727319.081:301): table=filter:98 family=2 entries=17 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:19.081000 audit[3059]: NETFILTER_CFG table=filter:98 family=2 entries=17 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:19.081000 audit[3059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe7ddc5b0 a2=0 a3=1 items=0 ppid=2833 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:19.081000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:19.160851 kernel: audit: type=1300 audit(1757727319.081:301): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe7ddc5b0 a2=0 a3=1 items=0 ppid=2833 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:19.160955 kernel: audit: type=1327 audit(1757727319.081:301): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:19.108000 audit[3059]: NETFILTER_CFG table=nat:99 family=2 entries=12 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:19.183331 kernel: audit: type=1325 audit(1757727319.108:302): table=nat:99 family=2 entries=12 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Sep 13 01:35:19.108000 audit[3059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe7ddc5b0 a2=0 a3=1 items=0 ppid=2833 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:19.203508 kubelet[2688]: W0913 01:35:19.203477 2688 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-3510.3.8-n-9d226ffbbf" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3510.3.8-n-9d226ffbbf' and this object Sep 13 01:35:19.203971 kubelet[2688]: E0913 01:35:19.203946 2688 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-3510.3.8-n-9d226ffbbf\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-3510.3.8-n-9d226ffbbf' and this object" logger="UnhandledError" Sep 13 01:35:19.204235 kubelet[2688]: W0913 01:35:19.204218 2688 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-3510.3.8-n-9d226ffbbf" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3510.3.8-n-9d226ffbbf' and this object Sep 13 01:35:19.204370 kubelet[2688]: E0913 01:35:19.204351 2688 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-3510.3.8-n-9d226ffbbf\" cannot list resource \"configmaps\" in API group 
\"\" in the namespace \"calico-system\": no relationship found between node 'ci-3510.3.8-n-9d226ffbbf' and this object" logger="UnhandledError" Sep 13 01:35:19.204613 kubelet[2688]: W0913 01:35:19.204596 2688 reflector.go:561] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.8-n-9d226ffbbf" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3510.3.8-n-9d226ffbbf' and this object Sep 13 01:35:19.204729 kubelet[2688]: E0913 01:35:19.204712 2688 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510.3.8-n-9d226ffbbf\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-3510.3.8-n-9d226ffbbf' and this object" logger="UnhandledError" Sep 13 01:35:19.224290 kernel: audit: type=1300 audit(1757727319.108:302): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe7ddc5b0 a2=0 a3=1 items=0 ppid=2833 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:19.108000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:19.243816 kernel: audit: type=1327 audit(1757727319.108:302): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:19.195000 audit[3062]: NETFILTER_CFG table=filter:100 family=2 entries=18 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:19.268177 kernel: audit: type=1325 
audit(1757727319.195:303): table=filter:100 family=2 entries=18 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:19.195000 audit[3062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffcc23e3f0 a2=0 a3=1 items=0 ppid=2833 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:19.299911 kernel: audit: type=1300 audit(1757727319.195:303): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffcc23e3f0 a2=0 a3=1 items=0 ppid=2833 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:19.301941 kubelet[2688]: I0913 01:35:19.301890 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8ab58f9-2b9f-4500-acee-1efce0ad2db2-tigera-ca-bundle\") pod \"calico-typha-7f5d6b5b94-z8zh4\" (UID: \"c8ab58f9-2b9f-4500-acee-1efce0ad2db2\") " pod="calico-system/calico-typha-7f5d6b5b94-z8zh4" Sep 13 01:35:19.302140 kubelet[2688]: I0913 01:35:19.302125 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ztb8\" (UniqueName: \"kubernetes.io/projected/c8ab58f9-2b9f-4500-acee-1efce0ad2db2-kube-api-access-2ztb8\") pod \"calico-typha-7f5d6b5b94-z8zh4\" (UID: \"c8ab58f9-2b9f-4500-acee-1efce0ad2db2\") " pod="calico-system/calico-typha-7f5d6b5b94-z8zh4" Sep 13 01:35:19.302260 kubelet[2688]: I0913 01:35:19.302245 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c8ab58f9-2b9f-4500-acee-1efce0ad2db2-typha-certs\") pod 
\"calico-typha-7f5d6b5b94-z8zh4\" (UID: \"c8ab58f9-2b9f-4500-acee-1efce0ad2db2\") " pod="calico-system/calico-typha-7f5d6b5b94-z8zh4" Sep 13 01:35:19.195000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:19.317984 kernel: audit: type=1327 audit(1757727319.195:303): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:19.252000 audit[3062]: NETFILTER_CFG table=nat:101 family=2 entries=12 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:19.342420 kernel: audit: type=1325 audit(1757727319.252:304): table=nat:101 family=2 entries=12 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:19.252000 audit[3062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcc23e3f0 a2=0 a3=1 items=0 ppid=2833 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:19.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:19.605132 kubelet[2688]: I0913 01:35:19.605092 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/21c33e44-175f-4bf4-be30-df4cfb08d7d7-policysync\") pod \"calico-node-cc8fs\" (UID: \"21c33e44-175f-4bf4-be30-df4cfb08d7d7\") " pod="calico-system/calico-node-cc8fs" Sep 13 01:35:19.605350 kubelet[2688]: I0913 01:35:19.605336 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/21c33e44-175f-4bf4-be30-df4cfb08d7d7-var-run-calico\") pod \"calico-node-cc8fs\" (UID: \"21c33e44-175f-4bf4-be30-df4cfb08d7d7\") " pod="calico-system/calico-node-cc8fs" Sep 13 01:35:19.605432 kubelet[2688]: I0913 01:35:19.605416 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/21c33e44-175f-4bf4-be30-df4cfb08d7d7-node-certs\") pod \"calico-node-cc8fs\" (UID: \"21c33e44-175f-4bf4-be30-df4cfb08d7d7\") " pod="calico-system/calico-node-cc8fs" Sep 13 01:35:19.605520 kubelet[2688]: I0913 01:35:19.605506 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/21c33e44-175f-4bf4-be30-df4cfb08d7d7-flexvol-driver-host\") pod \"calico-node-cc8fs\" (UID: \"21c33e44-175f-4bf4-be30-df4cfb08d7d7\") " pod="calico-system/calico-node-cc8fs" Sep 13 01:35:19.605593 kubelet[2688]: I0913 01:35:19.605580 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/21c33e44-175f-4bf4-be30-df4cfb08d7d7-var-lib-calico\") pod \"calico-node-cc8fs\" (UID: \"21c33e44-175f-4bf4-be30-df4cfb08d7d7\") " pod="calico-system/calico-node-cc8fs" Sep 13 01:35:19.605665 kubelet[2688]: I0913 01:35:19.605653 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21c33e44-175f-4bf4-be30-df4cfb08d7d7-xtables-lock\") pod \"calico-node-cc8fs\" (UID: \"21c33e44-175f-4bf4-be30-df4cfb08d7d7\") " pod="calico-system/calico-node-cc8fs" Sep 13 01:35:19.605741 kubelet[2688]: I0913 01:35:19.605727 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d82p9\" (UniqueName: 
\"kubernetes.io/projected/21c33e44-175f-4bf4-be30-df4cfb08d7d7-kube-api-access-d82p9\") pod \"calico-node-cc8fs\" (UID: \"21c33e44-175f-4bf4-be30-df4cfb08d7d7\") " pod="calico-system/calico-node-cc8fs" Sep 13 01:35:19.605822 kubelet[2688]: I0913 01:35:19.605809 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/21c33e44-175f-4bf4-be30-df4cfb08d7d7-cni-bin-dir\") pod \"calico-node-cc8fs\" (UID: \"21c33e44-175f-4bf4-be30-df4cfb08d7d7\") " pod="calico-system/calico-node-cc8fs" Sep 13 01:35:19.605901 kubelet[2688]: I0913 01:35:19.605889 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/21c33e44-175f-4bf4-be30-df4cfb08d7d7-cni-net-dir\") pod \"calico-node-cc8fs\" (UID: \"21c33e44-175f-4bf4-be30-df4cfb08d7d7\") " pod="calico-system/calico-node-cc8fs" Sep 13 01:35:19.605975 kubelet[2688]: I0913 01:35:19.605962 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/21c33e44-175f-4bf4-be30-df4cfb08d7d7-cni-log-dir\") pod \"calico-node-cc8fs\" (UID: \"21c33e44-175f-4bf4-be30-df4cfb08d7d7\") " pod="calico-system/calico-node-cc8fs" Sep 13 01:35:19.606044 kubelet[2688]: I0913 01:35:19.606032 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21c33e44-175f-4bf4-be30-df4cfb08d7d7-lib-modules\") pod \"calico-node-cc8fs\" (UID: \"21c33e44-175f-4bf4-be30-df4cfb08d7d7\") " pod="calico-system/calico-node-cc8fs" Sep 13 01:35:19.606108 kubelet[2688]: I0913 01:35:19.606097 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/21c33e44-175f-4bf4-be30-df4cfb08d7d7-tigera-ca-bundle\") pod \"calico-node-cc8fs\" (UID: \"21c33e44-175f-4bf4-be30-df4cfb08d7d7\") " pod="calico-system/calico-node-cc8fs" Sep 13 01:35:19.685384 kubelet[2688]: E0913 01:35:19.685332 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vpflq" podUID="77870db6-b52e-4395-a518-9c1b7d66eb0e" Sep 13 01:35:19.708116 kubelet[2688]: E0913 01:35:19.708090 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.708303 kubelet[2688]: W0913 01:35:19.708287 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.708397 kubelet[2688]: E0913 01:35:19.708372 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.708645 kubelet[2688]: E0913 01:35:19.708633 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.708744 kubelet[2688]: W0913 01:35:19.708731 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.708818 kubelet[2688]: E0913 01:35:19.708807 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.709051 kubelet[2688]: E0913 01:35:19.709039 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.709133 kubelet[2688]: W0913 01:35:19.709122 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.709207 kubelet[2688]: E0913 01:35:19.709195 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.709451 kubelet[2688]: E0913 01:35:19.709420 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.709451 kubelet[2688]: W0913 01:35:19.709447 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.709534 kubelet[2688]: E0913 01:35:19.709466 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.709814 kubelet[2688]: E0913 01:35:19.709671 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.709814 kubelet[2688]: W0913 01:35:19.709686 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.709814 kubelet[2688]: E0913 01:35:19.709702 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.713843 kubelet[2688]: E0913 01:35:19.713818 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.713843 kubelet[2688]: W0913 01:35:19.713836 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.713961 kubelet[2688]: E0913 01:35:19.713865 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.714226 kubelet[2688]: E0913 01:35:19.714127 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.714226 kubelet[2688]: W0913 01:35:19.714142 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.714331 kubelet[2688]: E0913 01:35:19.714228 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.714732 kubelet[2688]: E0913 01:35:19.714479 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.714732 kubelet[2688]: W0913 01:35:19.714492 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.714732 kubelet[2688]: E0913 01:35:19.714568 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.721211 kubelet[2688]: E0913 01:35:19.718775 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.721211 kubelet[2688]: W0913 01:35:19.718798 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.721211 kubelet[2688]: E0913 01:35:19.718992 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.721211 kubelet[2688]: W0913 01:35:19.719001 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.721524 kubelet[2688]: E0913 01:35:19.721504 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.722225 kubelet[2688]: E0913 01:35:19.722004 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.722648 kubelet[2688]: E0913 01:35:19.722264 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.722740 kubelet[2688]: W0913 01:35:19.722723 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.728766 kubelet[2688]: E0913 01:35:19.728733 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.734928 kubelet[2688]: E0913 01:35:19.734903 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.735062 kubelet[2688]: W0913 01:35:19.735045 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.735592 kubelet[2688]: E0913 01:35:19.735566 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.748208 kubelet[2688]: E0913 01:35:19.747311 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.748208 kubelet[2688]: W0913 01:35:19.747335 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.748208 kubelet[2688]: E0913 01:35:19.747460 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.748208 kubelet[2688]: E0913 01:35:19.747636 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.748208 kubelet[2688]: W0913 01:35:19.747645 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.748208 kubelet[2688]: E0913 01:35:19.747706 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.748208 kubelet[2688]: E0913 01:35:19.747788 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.748208 kubelet[2688]: W0913 01:35:19.747795 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.748208 kubelet[2688]: E0913 01:35:19.747847 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.748208 kubelet[2688]: E0913 01:35:19.747920 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.748540 kubelet[2688]: W0913 01:35:19.747926 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.748540 kubelet[2688]: E0913 01:35:19.748007 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.748540 kubelet[2688]: E0913 01:35:19.748105 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.748540 kubelet[2688]: W0913 01:35:19.748112 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.748540 kubelet[2688]: E0913 01:35:19.748163 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.748540 kubelet[2688]: E0913 01:35:19.748305 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.748540 kubelet[2688]: W0913 01:35:19.748313 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.748540 kubelet[2688]: E0913 01:35:19.748325 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.748540 kubelet[2688]: E0913 01:35:19.748507 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.748540 kubelet[2688]: W0913 01:35:19.748515 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.748773 kubelet[2688]: E0913 01:35:19.748539 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.748773 kubelet[2688]: E0913 01:35:19.748681 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.748773 kubelet[2688]: W0913 01:35:19.748688 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.748773 kubelet[2688]: E0913 01:35:19.748698 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.749208 kubelet[2688]: E0913 01:35:19.748886 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.749208 kubelet[2688]: W0913 01:35:19.748902 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.749208 kubelet[2688]: E0913 01:35:19.748911 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.749352 kubelet[2688]: E0913 01:35:19.749284 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.749352 kubelet[2688]: W0913 01:35:19.749295 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.749352 kubelet[2688]: E0913 01:35:19.749315 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.750219 kubelet[2688]: E0913 01:35:19.749462 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.750219 kubelet[2688]: W0913 01:35:19.749474 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.750219 kubelet[2688]: E0913 01:35:19.749482 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.750219 kubelet[2688]: E0913 01:35:19.749592 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.750219 kubelet[2688]: W0913 01:35:19.749601 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.750219 kubelet[2688]: E0913 01:35:19.749608 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.750219 kubelet[2688]: E0913 01:35:19.749720 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.750219 kubelet[2688]: W0913 01:35:19.749725 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.750219 kubelet[2688]: E0913 01:35:19.749732 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.750219 kubelet[2688]: E0913 01:35:19.750140 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.750496 kubelet[2688]: W0913 01:35:19.750151 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.750496 kubelet[2688]: E0913 01:35:19.750162 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.750496 kubelet[2688]: E0913 01:35:19.750362 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.750496 kubelet[2688]: W0913 01:35:19.750371 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.750496 kubelet[2688]: E0913 01:35:19.750380 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.750599 kubelet[2688]: E0913 01:35:19.750497 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.750599 kubelet[2688]: W0913 01:35:19.750504 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.750599 kubelet[2688]: E0913 01:35:19.750511 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.750669 kubelet[2688]: E0913 01:35:19.750617 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.750669 kubelet[2688]: W0913 01:35:19.750623 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.750669 kubelet[2688]: E0913 01:35:19.750630 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.750764 kubelet[2688]: E0913 01:35:19.750743 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.750764 kubelet[2688]: W0913 01:35:19.750757 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.750828 kubelet[2688]: E0913 01:35:19.750765 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.751205 kubelet[2688]: E0913 01:35:19.750869 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.751205 kubelet[2688]: W0913 01:35:19.750880 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.751205 kubelet[2688]: E0913 01:35:19.750887 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.751205 kubelet[2688]: E0913 01:35:19.750991 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.751205 kubelet[2688]: W0913 01:35:19.750998 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.751205 kubelet[2688]: E0913 01:35:19.751006 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.756361 kubelet[2688]: E0913 01:35:19.756336 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.756361 kubelet[2688]: W0913 01:35:19.756354 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.756494 kubelet[2688]: E0913 01:35:19.756369 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.756543 kubelet[2688]: E0913 01:35:19.756524 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.756543 kubelet[2688]: W0913 01:35:19.756537 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.756620 kubelet[2688]: E0913 01:35:19.756546 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.756674 kubelet[2688]: E0913 01:35:19.756659 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.756674 kubelet[2688]: W0913 01:35:19.756670 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.756744 kubelet[2688]: E0913 01:35:19.756680 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.756805 kubelet[2688]: E0913 01:35:19.756789 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.756805 kubelet[2688]: W0913 01:35:19.756800 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.756881 kubelet[2688]: E0913 01:35:19.756807 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.756934 kubelet[2688]: E0913 01:35:19.756918 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.756934 kubelet[2688]: W0913 01:35:19.756929 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.757004 kubelet[2688]: E0913 01:35:19.756937 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.757076 kubelet[2688]: E0913 01:35:19.757063 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.757076 kubelet[2688]: W0913 01:35:19.757074 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.757147 kubelet[2688]: E0913 01:35:19.757082 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.757217 kubelet[2688]: E0913 01:35:19.757203 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.757217 kubelet[2688]: W0913 01:35:19.757214 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.757291 kubelet[2688]: E0913 01:35:19.757222 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.757356 kubelet[2688]: E0913 01:35:19.757339 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.757356 kubelet[2688]: W0913 01:35:19.757351 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.757428 kubelet[2688]: E0913 01:35:19.757358 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.757535 kubelet[2688]: E0913 01:35:19.757470 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.757535 kubelet[2688]: W0913 01:35:19.757481 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.757535 kubelet[2688]: E0913 01:35:19.757491 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.809274 kubelet[2688]: E0913 01:35:19.809248 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.809439 kubelet[2688]: W0913 01:35:19.809423 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.809517 kubelet[2688]: E0913 01:35:19.809504 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.809619 kubelet[2688]: I0913 01:35:19.809603 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/77870db6-b52e-4395-a518-9c1b7d66eb0e-registration-dir\") pod \"csi-node-driver-vpflq\" (UID: \"77870db6-b52e-4395-a518-9c1b7d66eb0e\") " pod="calico-system/csi-node-driver-vpflq" Sep 13 01:35:19.809899 kubelet[2688]: E0913 01:35:19.809883 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.809996 kubelet[2688]: W0913 01:35:19.809983 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.810073 kubelet[2688]: E0913 01:35:19.810062 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.810940 kubelet[2688]: E0913 01:35:19.810916 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.810940 kubelet[2688]: W0913 01:35:19.810935 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.811067 kubelet[2688]: E0913 01:35:19.810957 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.811153 kubelet[2688]: E0913 01:35:19.811140 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.811153 kubelet[2688]: W0913 01:35:19.811153 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.811252 kubelet[2688]: E0913 01:35:19.811167 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.811359 kubelet[2688]: E0913 01:35:19.811345 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.811359 kubelet[2688]: W0913 01:35:19.811358 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.811415 kubelet[2688]: E0913 01:35:19.811367 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.811529 kubelet[2688]: E0913 01:35:19.811516 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.811529 kubelet[2688]: W0913 01:35:19.811528 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.811596 kubelet[2688]: E0913 01:35:19.811536 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.811684 kubelet[2688]: E0913 01:35:19.811672 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.811684 kubelet[2688]: W0913 01:35:19.811683 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.811749 kubelet[2688]: E0913 01:35:19.811690 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.811749 kubelet[2688]: I0913 01:35:19.811713 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77870db6-b52e-4395-a518-9c1b7d66eb0e-kubelet-dir\") pod \"csi-node-driver-vpflq\" (UID: \"77870db6-b52e-4395-a518-9c1b7d66eb0e\") " pod="calico-system/csi-node-driver-vpflq" Sep 13 01:35:19.811881 kubelet[2688]: E0913 01:35:19.811859 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.811881 kubelet[2688]: W0913 01:35:19.811874 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.811980 kubelet[2688]: E0913 01:35:19.811883 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.811980 kubelet[2688]: I0913 01:35:19.811897 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr7jn\" (UniqueName: \"kubernetes.io/projected/77870db6-b52e-4395-a518-9c1b7d66eb0e-kube-api-access-wr7jn\") pod \"csi-node-driver-vpflq\" (UID: \"77870db6-b52e-4395-a518-9c1b7d66eb0e\") " pod="calico-system/csi-node-driver-vpflq" Sep 13 01:35:19.812049 kubelet[2688]: E0913 01:35:19.812029 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.812049 kubelet[2688]: W0913 01:35:19.812043 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.812110 kubelet[2688]: E0913 01:35:19.812052 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.812211 kubelet[2688]: E0913 01:35:19.812197 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.812211 kubelet[2688]: W0913 01:35:19.812209 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.812276 kubelet[2688]: E0913 01:35:19.812218 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.812276 kubelet[2688]: I0913 01:35:19.812233 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/77870db6-b52e-4395-a518-9c1b7d66eb0e-varrun\") pod \"csi-node-driver-vpflq\" (UID: \"77870db6-b52e-4395-a518-9c1b7d66eb0e\") " pod="calico-system/csi-node-driver-vpflq" Sep 13 01:35:19.812382 kubelet[2688]: E0913 01:35:19.812369 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.812382 kubelet[2688]: W0913 01:35:19.812381 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.812458 kubelet[2688]: E0913 01:35:19.812395 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.812539 kubelet[2688]: E0913 01:35:19.812523 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.812539 kubelet[2688]: W0913 01:35:19.812536 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.812604 kubelet[2688]: E0913 01:35:19.812544 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.812604 kubelet[2688]: I0913 01:35:19.812558 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/77870db6-b52e-4395-a518-9c1b7d66eb0e-socket-dir\") pod \"csi-node-driver-vpflq\" (UID: \"77870db6-b52e-4395-a518-9c1b7d66eb0e\") " pod="calico-system/csi-node-driver-vpflq" Sep 13 01:35:19.812728 kubelet[2688]: E0913 01:35:19.812713 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.812728 kubelet[2688]: W0913 01:35:19.812726 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.812816 kubelet[2688]: E0913 01:35:19.812740 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.812873 kubelet[2688]: E0913 01:35:19.812858 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.812873 kubelet[2688]: W0913 01:35:19.812870 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.812959 kubelet[2688]: E0913 01:35:19.812883 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.813020 kubelet[2688]: E0913 01:35:19.813004 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.813020 kubelet[2688]: W0913 01:35:19.813016 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.813101 kubelet[2688]: E0913 01:35:19.813031 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.813171 kubelet[2688]: E0913 01:35:19.813155 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.813171 kubelet[2688]: W0913 01:35:19.813167 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.813280 kubelet[2688]: E0913 01:35:19.813226 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.813397 kubelet[2688]: E0913 01:35:19.813370 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.813397 kubelet[2688]: W0913 01:35:19.813381 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.813397 kubelet[2688]: E0913 01:35:19.813395 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.813531 kubelet[2688]: E0913 01:35:19.813513 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.813531 kubelet[2688]: W0913 01:35:19.813523 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.813607 kubelet[2688]: E0913 01:35:19.813533 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.813666 kubelet[2688]: E0913 01:35:19.813651 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.813666 kubelet[2688]: W0913 01:35:19.813662 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.813739 kubelet[2688]: E0913 01:35:19.813670 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.813796 kubelet[2688]: E0913 01:35:19.813782 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.813796 kubelet[2688]: W0913 01:35:19.813792 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.813869 kubelet[2688]: E0913 01:35:19.813799 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.913217 kubelet[2688]: E0913 01:35:19.913108 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.913217 kubelet[2688]: W0913 01:35:19.913130 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.913217 kubelet[2688]: E0913 01:35:19.913148 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.916994 kubelet[2688]: E0913 01:35:19.916969 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.916994 kubelet[2688]: W0913 01:35:19.916988 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.917144 kubelet[2688]: E0913 01:35:19.917004 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.917339 kubelet[2688]: E0913 01:35:19.917319 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.917339 kubelet[2688]: W0913 01:35:19.917336 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.917426 kubelet[2688]: E0913 01:35:19.917353 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.918173 kubelet[2688]: E0913 01:35:19.918128 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.918173 kubelet[2688]: W0913 01:35:19.918145 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.918173 kubelet[2688]: E0913 01:35:19.918161 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.918606 kubelet[2688]: E0913 01:35:19.918586 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.918606 kubelet[2688]: W0913 01:35:19.918601 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.918717 kubelet[2688]: E0913 01:35:19.918702 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.919073 kubelet[2688]: E0913 01:35:19.919048 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.919073 kubelet[2688]: W0913 01:35:19.919070 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.919175 kubelet[2688]: E0913 01:35:19.919160 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.919371 kubelet[2688]: E0913 01:35:19.919352 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.919427 kubelet[2688]: W0913 01:35:19.919372 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.919475 kubelet[2688]: E0913 01:35:19.919454 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.919591 kubelet[2688]: E0913 01:35:19.919573 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.919591 kubelet[2688]: W0913 01:35:19.919586 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.919661 kubelet[2688]: E0913 01:35:19.919601 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.919825 kubelet[2688]: E0913 01:35:19.919811 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.919825 kubelet[2688]: W0913 01:35:19.919822 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.919904 kubelet[2688]: E0913 01:35:19.919834 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.920041 kubelet[2688]: E0913 01:35:19.920027 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.920041 kubelet[2688]: W0913 01:35:19.920038 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.920146 kubelet[2688]: E0913 01:35:19.920130 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.920453 kubelet[2688]: E0913 01:35:19.920435 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.920453 kubelet[2688]: W0913 01:35:19.920454 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.920547 kubelet[2688]: E0913 01:35:19.920531 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.920708 kubelet[2688]: E0913 01:35:19.920687 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.920708 kubelet[2688]: W0913 01:35:19.920704 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.920792 kubelet[2688]: E0913 01:35:19.920773 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.920933 kubelet[2688]: E0913 01:35:19.920918 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.920933 kubelet[2688]: W0913 01:35:19.920930 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.921054 kubelet[2688]: E0913 01:35:19.921038 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.921199 kubelet[2688]: E0913 01:35:19.921166 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.921257 kubelet[2688]: W0913 01:35:19.921206 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.921257 kubelet[2688]: E0913 01:35:19.921224 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.921447 kubelet[2688]: E0913 01:35:19.921432 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.921447 kubelet[2688]: W0913 01:35:19.921445 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.921521 kubelet[2688]: E0913 01:35:19.921456 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.921646 kubelet[2688]: E0913 01:35:19.921631 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.921646 kubelet[2688]: W0913 01:35:19.921642 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.921752 kubelet[2688]: E0913 01:35:19.921737 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.921917 kubelet[2688]: E0913 01:35:19.921903 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.921917 kubelet[2688]: W0913 01:35:19.921913 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.921999 kubelet[2688]: E0913 01:35:19.921971 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.922153 kubelet[2688]: E0913 01:35:19.922135 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.922153 kubelet[2688]: W0913 01:35:19.922150 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.922256 kubelet[2688]: E0913 01:35:19.922239 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.922402 kubelet[2688]: E0913 01:35:19.922389 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.922402 kubelet[2688]: W0913 01:35:19.922400 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.922480 kubelet[2688]: E0913 01:35:19.922465 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.922606 kubelet[2688]: E0913 01:35:19.922594 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.922606 kubelet[2688]: W0913 01:35:19.922604 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.922728 kubelet[2688]: E0913 01:35:19.922615 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.922872 kubelet[2688]: E0913 01:35:19.922857 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.922911 kubelet[2688]: W0913 01:35:19.922872 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.922911 kubelet[2688]: E0913 01:35:19.922886 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.923111 kubelet[2688]: E0913 01:35:19.923092 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.923111 kubelet[2688]: W0913 01:35:19.923106 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.923249 kubelet[2688]: E0913 01:35:19.923234 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.923434 kubelet[2688]: E0913 01:35:19.923417 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.923434 kubelet[2688]: W0913 01:35:19.923431 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.923560 kubelet[2688]: E0913 01:35:19.923541 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.923673 kubelet[2688]: E0913 01:35:19.923658 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.923673 kubelet[2688]: W0913 01:35:19.923669 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.923777 kubelet[2688]: E0913 01:35:19.923761 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.923941 kubelet[2688]: E0913 01:35:19.923926 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.923941 kubelet[2688]: W0913 01:35:19.923939 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.924021 kubelet[2688]: E0913 01:35:19.924001 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.925627 kubelet[2688]: E0913 01:35:19.925605 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.925627 kubelet[2688]: W0913 01:35:19.925624 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.925721 kubelet[2688]: E0913 01:35:19.925644 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.926420 kubelet[2688]: E0913 01:35:19.925947 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.926484 kubelet[2688]: W0913 01:35:19.926424 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.926545 kubelet[2688]: E0913 01:35:19.926529 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.931700 kubelet[2688]: E0913 01:35:19.931669 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.931700 kubelet[2688]: W0913 01:35:19.931695 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.931820 kubelet[2688]: E0913 01:35:19.931712 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:19.932048 kubelet[2688]: E0913 01:35:19.932023 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.932048 kubelet[2688]: W0913 01:35:19.932045 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.932156 kubelet[2688]: E0913 01:35:19.932138 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:19.932324 kubelet[2688]: E0913 01:35:19.932305 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:19.932324 kubelet[2688]: W0913 01:35:19.932319 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:19.932384 kubelet[2688]: E0913 01:35:19.932330 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.023700 kubelet[2688]: E0913 01:35:20.023668 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.023700 kubelet[2688]: W0913 01:35:20.023693 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.023852 kubelet[2688]: E0913 01:35:20.023713 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.023898 kubelet[2688]: E0913 01:35:20.023877 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.023898 kubelet[2688]: W0913 01:35:20.023885 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.023898 kubelet[2688]: E0913 01:35:20.023893 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.024046 kubelet[2688]: E0913 01:35:20.024025 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.024084 kubelet[2688]: W0913 01:35:20.024048 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.024084 kubelet[2688]: E0913 01:35:20.024058 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.024227 kubelet[2688]: E0913 01:35:20.024213 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.024266 kubelet[2688]: W0913 01:35:20.024231 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.024266 kubelet[2688]: E0913 01:35:20.024240 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.024397 kubelet[2688]: E0913 01:35:20.024379 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.024397 kubelet[2688]: W0913 01:35:20.024392 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.024451 kubelet[2688]: E0913 01:35:20.024401 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.024614 kubelet[2688]: E0913 01:35:20.024595 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.024646 kubelet[2688]: W0913 01:35:20.024611 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.024646 kubelet[2688]: E0913 01:35:20.024629 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.125140 kubelet[2688]: E0913 01:35:20.125113 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.125334 kubelet[2688]: W0913 01:35:20.125317 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.125400 kubelet[2688]: E0913 01:35:20.125387 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.125644 kubelet[2688]: E0913 01:35:20.125632 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.125716 kubelet[2688]: W0913 01:35:20.125705 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.125780 kubelet[2688]: E0913 01:35:20.125770 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.126009 kubelet[2688]: E0913 01:35:20.125997 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.126090 kubelet[2688]: W0913 01:35:20.126079 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.126154 kubelet[2688]: E0913 01:35:20.126143 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.126407 kubelet[2688]: E0913 01:35:20.126396 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.126490 kubelet[2688]: W0913 01:35:20.126478 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.126553 kubelet[2688]: E0913 01:35:20.126542 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.126759 kubelet[2688]: E0913 01:35:20.126748 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.126830 kubelet[2688]: W0913 01:35:20.126820 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.126899 kubelet[2688]: E0913 01:35:20.126885 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.127127 kubelet[2688]: E0913 01:35:20.127116 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.127228 kubelet[2688]: W0913 01:35:20.127215 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.127297 kubelet[2688]: E0913 01:35:20.127286 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.228333 kubelet[2688]: E0913 01:35:20.228307 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.228685 kubelet[2688]: W0913 01:35:20.228668 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.228750 kubelet[2688]: E0913 01:35:20.228738 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.229047 kubelet[2688]: E0913 01:35:20.229034 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.229134 kubelet[2688]: W0913 01:35:20.229123 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.229211 kubelet[2688]: E0913 01:35:20.229199 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.229443 kubelet[2688]: E0913 01:35:20.229432 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.229516 kubelet[2688]: W0913 01:35:20.229505 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.229585 kubelet[2688]: E0913 01:35:20.229573 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.229822 kubelet[2688]: E0913 01:35:20.229810 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.229903 kubelet[2688]: W0913 01:35:20.229892 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.229961 kubelet[2688]: E0913 01:35:20.229950 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.230193 kubelet[2688]: E0913 01:35:20.230171 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.230280 kubelet[2688]: W0913 01:35:20.230267 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.230339 kubelet[2688]: E0913 01:35:20.230328 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.230565 kubelet[2688]: E0913 01:35:20.230553 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.230649 kubelet[2688]: W0913 01:35:20.230638 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.230712 kubelet[2688]: E0913 01:35:20.230701 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.329000 audit[3182]: NETFILTER_CFG table=filter:102 family=2 entries=19 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:20.329000 audit[3182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd31d1410 a2=0 a3=1 items=0 ppid=2833 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:20.329000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:20.331734 kubelet[2688]: E0913 01:35:20.331714 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.331852 kubelet[2688]: W0913 01:35:20.331837 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.331920 kubelet[2688]: E0913 01:35:20.331908 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.332160 kubelet[2688]: E0913 01:35:20.332149 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.332264 kubelet[2688]: W0913 01:35:20.332251 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.332325 kubelet[2688]: E0913 01:35:20.332314 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.332548 kubelet[2688]: E0913 01:35:20.332536 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.332617 kubelet[2688]: W0913 01:35:20.332606 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.332680 kubelet[2688]: E0913 01:35:20.332669 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.332902 kubelet[2688]: E0913 01:35:20.332890 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.332979 kubelet[2688]: W0913 01:35:20.332968 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.333041 kubelet[2688]: E0913 01:35:20.333030 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.333323 kubelet[2688]: E0913 01:35:20.333310 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.333408 kubelet[2688]: W0913 01:35:20.333395 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.333473 kubelet[2688]: E0913 01:35:20.333462 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.333735 kubelet[2688]: E0913 01:35:20.333724 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.333000 audit[3182]: NETFILTER_CFG table=nat:103 family=2 entries=12 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:20.333000 audit[3182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd31d1410 a2=0 a3=1 items=0 ppid=2833 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:20.333000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:20.333949 kubelet[2688]: W0913 01:35:20.333803 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.334004 kubelet[2688]: E0913 01:35:20.333991 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.391111 kubelet[2688]: E0913 01:35:20.391083 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.391307 kubelet[2688]: W0913 01:35:20.391287 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.391389 kubelet[2688]: E0913 01:35:20.391376 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.391628 kubelet[2688]: E0913 01:35:20.391593 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.391628 kubelet[2688]: W0913 01:35:20.391615 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.391705 kubelet[2688]: E0913 01:35:20.391633 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.399286 kubelet[2688]: E0913 01:35:20.399260 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.399433 kubelet[2688]: W0913 01:35:20.399418 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.399513 kubelet[2688]: E0913 01:35:20.399500 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.403827 kubelet[2688]: E0913 01:35:20.403785 2688 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Sep 13 01:35:20.403969 kubelet[2688]: E0913 01:35:20.403934 2688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8ab58f9-2b9f-4500-acee-1efce0ad2db2-typha-certs podName:c8ab58f9-2b9f-4500-acee-1efce0ad2db2 nodeName:}" failed. No retries permitted until 2025-09-13 01:35:20.903911241 +0000 UTC m=+22.404749473 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/c8ab58f9-2b9f-4500-acee-1efce0ad2db2-typha-certs") pod "calico-typha-7f5d6b5b94-z8zh4" (UID: "c8ab58f9-2b9f-4500-acee-1efce0ad2db2") : failed to sync secret cache: timed out waiting for the condition Sep 13 01:35:20.403969 kubelet[2688]: E0913 01:35:20.403784 2688 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 13 01:35:20.404059 kubelet[2688]: E0913 01:35:20.403981 2688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c8ab58f9-2b9f-4500-acee-1efce0ad2db2-tigera-ca-bundle podName:c8ab58f9-2b9f-4500-acee-1efce0ad2db2 nodeName:}" failed. No retries permitted until 2025-09-13 01:35:20.903974201 +0000 UTC m=+22.404812433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/c8ab58f9-2b9f-4500-acee-1efce0ad2db2-tigera-ca-bundle") pod "calico-typha-7f5d6b5b94-z8zh4" (UID: "c8ab58f9-2b9f-4500-acee-1efce0ad2db2") : failed to sync configmap cache: timed out waiting for the condition Sep 13 01:35:20.434784 kubelet[2688]: E0913 01:35:20.434756 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.434959 kubelet[2688]: W0913 01:35:20.434943 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.435030 kubelet[2688]: E0913 01:35:20.435018 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.435282 kubelet[2688]: E0913 01:35:20.435269 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.435371 kubelet[2688]: W0913 01:35:20.435359 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.435430 kubelet[2688]: E0913 01:35:20.435419 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.435658 kubelet[2688]: E0913 01:35:20.435646 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.435742 kubelet[2688]: W0913 01:35:20.435730 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.435805 kubelet[2688]: E0913 01:35:20.435795 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.537440 kubelet[2688]: E0913 01:35:20.537339 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.537594 kubelet[2688]: W0913 01:35:20.537569 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.537670 kubelet[2688]: E0913 01:35:20.537658 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.538860 kubelet[2688]: E0913 01:35:20.538835 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.538977 kubelet[2688]: W0913 01:35:20.538964 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.539036 kubelet[2688]: E0913 01:35:20.539025 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.539993 kubelet[2688]: E0913 01:35:20.539977 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.540119 kubelet[2688]: W0913 01:35:20.540105 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.540201 kubelet[2688]: E0913 01:35:20.540175 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.569866 kubelet[2688]: E0913 01:35:20.569834 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.569866 kubelet[2688]: W0913 01:35:20.569856 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.569866 kubelet[2688]: E0913 01:35:20.569874 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.640769 kubelet[2688]: E0913 01:35:20.640733 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.640769 kubelet[2688]: W0913 01:35:20.640760 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.640936 kubelet[2688]: E0913 01:35:20.640792 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.640995 kubelet[2688]: E0913 01:35:20.640977 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.640995 kubelet[2688]: W0913 01:35:20.640992 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.641060 kubelet[2688]: E0913 01:35:20.641002 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.664636 env[1589]: time="2025-09-13T01:35:20.664119615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cc8fs,Uid:21c33e44-175f-4bf4-be30-df4cfb08d7d7,Namespace:calico-system,Attempt:0,}" Sep 13 01:35:20.701884 env[1589]: time="2025-09-13T01:35:20.701823245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:20.702095 env[1589]: time="2025-09-13T01:35:20.702072485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:20.702200 env[1589]: time="2025-09-13T01:35:20.702167324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:20.702517 env[1589]: time="2025-09-13T01:35:20.702463964Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/239dfc2437f367972d06a5b4f9116e68d8c355bdb5a45cdbf2886ee435d3f0cb pid=3212 runtime=io.containerd.runc.v2 Sep 13 01:35:20.723164 systemd[1]: run-containerd-runc-k8s.io-239dfc2437f367972d06a5b4f9116e68d8c355bdb5a45cdbf2886ee435d3f0cb-runc.queDer.mount: Deactivated successfully. Sep 13 01:35:20.741894 kubelet[2688]: E0913 01:35:20.741864 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.742043 kubelet[2688]: W0913 01:35:20.742028 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.742132 kubelet[2688]: E0913 01:35:20.742120 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.742491 kubelet[2688]: E0913 01:35:20.742478 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.742601 kubelet[2688]: W0913 01:35:20.742587 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.742673 kubelet[2688]: E0913 01:35:20.742662 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.750337 env[1589]: time="2025-09-13T01:35:20.750300900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cc8fs,Uid:21c33e44-175f-4bf4-be30-df4cfb08d7d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"239dfc2437f367972d06a5b4f9116e68d8c355bdb5a45cdbf2886ee435d3f0cb\"" Sep 13 01:35:20.753696 env[1589]: time="2025-09-13T01:35:20.753389376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 01:35:20.843576 kubelet[2688]: E0913 01:35:20.843475 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.843576 kubelet[2688]: W0913 01:35:20.843499 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.843576 kubelet[2688]: E0913 01:35:20.843520 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.844386 kubelet[2688]: E0913 01:35:20.844359 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.844386 kubelet[2688]: W0913 01:35:20.844377 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.844493 kubelet[2688]: E0913 01:35:20.844390 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.945150 kubelet[2688]: E0913 01:35:20.945118 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.945150 kubelet[2688]: W0913 01:35:20.945142 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.945366 kubelet[2688]: E0913 01:35:20.945164 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.945448 kubelet[2688]: E0913 01:35:20.945427 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.945448 kubelet[2688]: W0913 01:35:20.945443 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.945526 kubelet[2688]: E0913 01:35:20.945460 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.945677 kubelet[2688]: E0913 01:35:20.945662 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.945677 kubelet[2688]: W0913 01:35:20.945675 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.945763 kubelet[2688]: E0913 01:35:20.945690 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.945848 kubelet[2688]: E0913 01:35:20.945832 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.945848 kubelet[2688]: W0913 01:35:20.945845 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.945848 kubelet[2688]: E0913 01:35:20.945854 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.945987 kubelet[2688]: E0913 01:35:20.945976 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.945987 kubelet[2688]: W0913 01:35:20.945985 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.946067 kubelet[2688]: E0913 01:35:20.945998 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.946174 kubelet[2688]: E0913 01:35:20.946155 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.946174 kubelet[2688]: W0913 01:35:20.946167 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.946174 kubelet[2688]: E0913 01:35:20.946175 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.946621 kubelet[2688]: E0913 01:35:20.946462 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.946621 kubelet[2688]: W0913 01:35:20.946476 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.946621 kubelet[2688]: E0913 01:35:20.946489 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.946938 kubelet[2688]: E0913 01:35:20.946807 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.946938 kubelet[2688]: W0913 01:35:20.946822 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.946938 kubelet[2688]: E0913 01:35:20.946849 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.947262 kubelet[2688]: E0913 01:35:20.947105 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.947262 kubelet[2688]: W0913 01:35:20.947119 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.947262 kubelet[2688]: E0913 01:35:20.947143 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.947764 kubelet[2688]: E0913 01:35:20.947443 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.947764 kubelet[2688]: W0913 01:35:20.947456 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.947764 kubelet[2688]: E0913 01:35:20.947481 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.947764 kubelet[2688]: E0913 01:35:20.947670 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.947764 kubelet[2688]: W0913 01:35:20.947681 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.947764 kubelet[2688]: E0913 01:35:20.947692 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:35:20.955525 kubelet[2688]: E0913 01:35:20.955495 2688 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:35:20.955667 kubelet[2688]: W0913 01:35:20.955652 2688 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:35:20.955764 kubelet[2688]: E0913 01:35:20.955751 2688 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:35:20.999002 env[1589]: time="2025-09-13T01:35:20.998954970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f5d6b5b94-z8zh4,Uid:c8ab58f9-2b9f-4500-acee-1efce0ad2db2,Namespace:calico-system,Attempt:0,}" Sep 13 01:35:21.032983 env[1589]: time="2025-09-13T01:35:21.032907926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:21.033129 env[1589]: time="2025-09-13T01:35:21.032951086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:21.033129 env[1589]: time="2025-09-13T01:35:21.032967206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:21.033333 env[1589]: time="2025-09-13T01:35:21.033301605Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/150a09ae5d2fcde24750af263b6a20f458eff603fddc5c302e3a7d616a8d96df pid=3269 runtime=io.containerd.runc.v2 Sep 13 01:35:21.077693 env[1589]: time="2025-09-13T01:35:21.077627748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f5d6b5b94-z8zh4,Uid:c8ab58f9-2b9f-4500-acee-1efce0ad2db2,Namespace:calico-system,Attempt:0,} returns sandbox id \"150a09ae5d2fcde24750af263b6a20f458eff603fddc5c302e3a7d616a8d96df\"" Sep 13 01:35:21.342000 audit[3303]: NETFILTER_CFG table=filter:104 family=2 entries=21 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:21.342000 audit[3303]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe7ab6d00 a2=0 a3=1 items=0 ppid=2833 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:21.342000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:21.348000 audit[3303]: NETFILTER_CFG table=nat:105 family=2 entries=12 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:21.348000 audit[3303]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe7ab6d00 a2=0 a3=1 items=0 ppid=2833 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:21.348000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:21.604255 kubelet[2688]: E0913 01:35:21.603710 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vpflq" podUID="77870db6-b52e-4395-a518-9c1b7d66eb0e" Sep 13 01:35:22.035170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2929717809.mount: Deactivated successfully. Sep 13 01:35:22.209494 env[1589]: time="2025-09-13T01:35:22.209448161Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:22.216746 env[1589]: time="2025-09-13T01:35:22.216705831Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:22.222137 env[1589]: time="2025-09-13T01:35:22.222099984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:22.226935 env[1589]: time="2025-09-13T01:35:22.226896818Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:22.227319 env[1589]: time="2025-09-13T01:35:22.227293978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 13 
01:35:22.229618 env[1589]: time="2025-09-13T01:35:22.229587175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 01:35:22.230618 env[1589]: time="2025-09-13T01:35:22.230589014Z" level=info msg="CreateContainer within sandbox \"239dfc2437f367972d06a5b4f9116e68d8c355bdb5a45cdbf2886ee435d3f0cb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 01:35:22.261906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount901364549.mount: Deactivated successfully. Sep 13 01:35:22.278110 env[1589]: time="2025-09-13T01:35:22.278053313Z" level=info msg="CreateContainer within sandbox \"239dfc2437f367972d06a5b4f9116e68d8c355bdb5a45cdbf2886ee435d3f0cb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8e6ee12cde916f1fb1cf9e37ed52123697341c2fbf32a25d08731be43b9c61a1\"" Sep 13 01:35:22.278984 env[1589]: time="2025-09-13T01:35:22.278944032Z" level=info msg="StartContainer for \"8e6ee12cde916f1fb1cf9e37ed52123697341c2fbf32a25d08731be43b9c61a1\"" Sep 13 01:35:22.343590 env[1589]: time="2025-09-13T01:35:22.343160790Z" level=info msg="StartContainer for \"8e6ee12cde916f1fb1cf9e37ed52123697341c2fbf32a25d08731be43b9c61a1\" returns successfully" Sep 13 01:35:23.036081 env[1589]: time="2025-09-13T01:35:23.036037668Z" level=info msg="shim disconnected" id=8e6ee12cde916f1fb1cf9e37ed52123697341c2fbf32a25d08731be43b9c61a1 Sep 13 01:35:23.036323 env[1589]: time="2025-09-13T01:35:23.036304428Z" level=warning msg="cleaning up after shim disconnected" id=8e6ee12cde916f1fb1cf9e37ed52123697341c2fbf32a25d08731be43b9c61a1 namespace=k8s.io Sep 13 01:35:23.036386 env[1589]: time="2025-09-13T01:35:23.036374388Z" level=info msg="cleaning up dead shim" Sep 13 01:35:23.043018 env[1589]: time="2025-09-13T01:35:23.042983140Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:35:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3360 runtime=io.containerd.runc.v2\n" Sep 13 01:35:23.603566 kubelet[2688]: 
E0913 01:35:23.603516 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vpflq" podUID="77870db6-b52e-4395-a518-9c1b7d66eb0e"
Sep 13 01:35:24.567633 env[1589]: time="2025-09-13T01:35:24.567577891Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:35:24.575701 env[1589]: time="2025-09-13T01:35:24.575653242Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:35:24.580538 env[1589]: time="2025-09-13T01:35:24.580501716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:35:24.585431 env[1589]: time="2025-09-13T01:35:24.585396950Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:35:24.586070 env[1589]: time="2025-09-13T01:35:24.586043309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 13 01:35:24.595230 env[1589]: time="2025-09-13T01:35:24.588616746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 13 01:35:24.602682 env[1589]: time="2025-09-13T01:35:24.602632969Z" level=info msg="CreateContainer within sandbox \"150a09ae5d2fcde24750af263b6a20f458eff603fddc5c302e3a7d616a8d96df\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 01:35:24.635895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3014890226.mount: Deactivated successfully.
Sep 13 01:35:24.653042 env[1589]: time="2025-09-13T01:35:24.652976387Z" level=info msg="CreateContainer within sandbox \"150a09ae5d2fcde24750af263b6a20f458eff603fddc5c302e3a7d616a8d96df\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"97a4b5c02f9e61bad9bc122cfe3f2a9b1401b5257be4c6de50edb1705a63f86c\""
Sep 13 01:35:24.654005 env[1589]: time="2025-09-13T01:35:24.653979506Z" level=info msg="StartContainer for \"97a4b5c02f9e61bad9bc122cfe3f2a9b1401b5257be4c6de50edb1705a63f86c\""
Sep 13 01:35:24.717097 env[1589]: time="2025-09-13T01:35:24.717053309Z" level=info msg="StartContainer for \"97a4b5c02f9e61bad9bc122cfe3f2a9b1401b5257be4c6de50edb1705a63f86c\" returns successfully"
Sep 13 01:35:24.910248 kubelet[2688]: I0913 01:35:24.910109 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f5d6b5b94-z8zh4" podStartSLOduration=2.401534592 podStartE2EDuration="5.910092273s" podCreationTimestamp="2025-09-13 01:35:19 +0000 UTC" firstStartedPulling="2025-09-13 01:35:21.078904466 +0000 UTC m=+22.579742698" lastFinishedPulling="2025-09-13 01:35:24.587462187 +0000 UTC m=+26.088300379" observedRunningTime="2025-09-13 01:35:24.909922113 +0000 UTC m=+26.410760345" watchObservedRunningTime="2025-09-13 01:35:24.910092273 +0000 UTC m=+26.410930505"
Sep 13 01:35:25.603463 kubelet[2688]: E0913 01:35:25.603411 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vpflq" podUID="77870db6-b52e-4395-a518-9c1b7d66eb0e"
Sep 13 01:35:25.939000 audit[3416]: NETFILTER_CFG table=filter:106 family=2 entries=21 op=nft_register_rule pid=3416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:35:25.944576 kernel: kauditd_printk_skb: 14 callbacks suppressed
Sep 13 01:35:25.944715 kernel: audit: type=1325 audit(1757727325.939:309): table=filter:106 family=2 entries=21 op=nft_register_rule pid=3416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:35:25.939000 audit[3416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe24921f0 a2=0 a3=1 items=0 ppid=2833 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:35:25.985795 kernel: audit: type=1300 audit(1757727325.939:309): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe24921f0 a2=0 a3=1 items=0 ppid=2833 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:35:25.939000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:35:26.002250 kernel: audit: type=1327 audit(1757727325.939:309): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:35:26.005000 audit[3416]: NETFILTER_CFG table=nat:107 family=2 entries=19 op=nft_register_chain pid=3416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:35:26.019390 kernel: audit: type=1325 audit(1757727326.005:310): table=nat:107 family=2 entries=19 op=nft_register_chain pid=3416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:35:26.005000 audit[3416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe24921f0 a2=0 a3=1 items=0 ppid=2833 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:35:26.045863 kernel: audit: type=1300 audit(1757727326.005:310): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe24921f0 a2=0 a3=1 items=0 ppid=2833 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:35:26.005000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:35:26.060982 kernel: audit: type=1327 audit(1757727326.005:310): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:35:27.603453 kubelet[2688]: E0913 01:35:27.603382 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vpflq" podUID="77870db6-b52e-4395-a518-9c1b7d66eb0e"
Sep 13 01:35:28.352950 env[1589]: time="2025-09-13T01:35:28.352896081Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:35:28.359881 env[1589]: time="2025-09-13T01:35:28.359841353Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:35:28.363809 env[1589]: time="2025-09-13T01:35:28.363781069Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:35:28.368062 env[1589]: time="2025-09-13T01:35:28.368017984Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:35:28.368638 env[1589]: time="2025-09-13T01:35:28.368610223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 13 01:35:28.372970 env[1589]: time="2025-09-13T01:35:28.372936818Z" level=info msg="CreateContainer within sandbox \"239dfc2437f367972d06a5b4f9116e68d8c355bdb5a45cdbf2886ee435d3f0cb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 13 01:35:28.405039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount557900865.mount: Deactivated successfully.
Sep 13 01:35:28.425417 env[1589]: time="2025-09-13T01:35:28.425374799Z" level=info msg="CreateContainer within sandbox \"239dfc2437f367972d06a5b4f9116e68d8c355bdb5a45cdbf2886ee435d3f0cb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9180e27ff634438f2dd4d2e3e4da6789ad00bf815e8d06357b8f12dd46511ada\""
Sep 13 01:35:28.426680 env[1589]: time="2025-09-13T01:35:28.425979998Z" level=info msg="StartContainer for \"9180e27ff634438f2dd4d2e3e4da6789ad00bf815e8d06357b8f12dd46511ada\""
Sep 13 01:35:28.449954 systemd[1]: run-containerd-runc-k8s.io-9180e27ff634438f2dd4d2e3e4da6789ad00bf815e8d06357b8f12dd46511ada-runc.zDRvvb.mount: Deactivated successfully.
Sep 13 01:35:28.495443 env[1589]: time="2025-09-13T01:35:28.493339322Z" level=info msg="StartContainer for \"9180e27ff634438f2dd4d2e3e4da6789ad00bf815e8d06357b8f12dd46511ada\" returns successfully"
Sep 13 01:35:29.604044 kubelet[2688]: E0913 01:35:29.603992 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vpflq" podUID="77870db6-b52e-4395-a518-9c1b7d66eb0e"
Sep 13 01:35:30.250703 env[1589]: time="2025-09-13T01:35:30.250632051Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 01:35:30.268401 kubelet[2688]: I0913 01:35:30.268377 2688 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 13 01:35:30.276229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9180e27ff634438f2dd4d2e3e4da6789ad00bf815e8d06357b8f12dd46511ada-rootfs.mount: Deactivated successfully.
Sep 13 01:35:30.398564 kubelet[2688]: I0913 01:35:30.398516 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmcmf\" (UniqueName: \"kubernetes.io/projected/e8755f74-5b78-4c9e-aa5f-ad0f5cad8960-kube-api-access-nmcmf\") pod \"whisker-544dd58775-5nxng\" (UID: \"e8755f74-5b78-4c9e-aa5f-ad0f5cad8960\") " pod="calico-system/whisker-544dd58775-5nxng"
Sep 13 01:35:30.398857 kubelet[2688]: I0913 01:35:30.398836 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8755f74-5b78-4c9e-aa5f-ad0f5cad8960-whisker-backend-key-pair\") pod \"whisker-544dd58775-5nxng\" (UID: \"e8755f74-5b78-4c9e-aa5f-ad0f5cad8960\") " pod="calico-system/whisker-544dd58775-5nxng"
Sep 13 01:35:30.399040 kubelet[2688]: I0913 01:35:30.399025 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hrl2\" (UniqueName: \"kubernetes.io/projected/4867af55-bd3a-4712-8e0f-bf048a45159f-kube-api-access-2hrl2\") pod \"coredns-7c65d6cfc9-jxjcc\" (UID: \"4867af55-bd3a-4712-8e0f-bf048a45159f\") " pod="kube-system/coredns-7c65d6cfc9-jxjcc"
Sep 13 01:35:30.399175 kubelet[2688]: I0913 01:35:30.399161 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgmbd\" (UniqueName: \"kubernetes.io/projected/7bfd1f1e-041a-4f50-ba7b-87c117566d38-kube-api-access-dgmbd\") pod \"calico-apiserver-948d44db6-dqp2g\" (UID: \"7bfd1f1e-041a-4f50-ba7b-87c117566d38\") " pod="calico-apiserver/calico-apiserver-948d44db6-dqp2g"
Sep 13 01:35:30.399368 kubelet[2688]: I0913 01:35:30.399346 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4867af55-bd3a-4712-8e0f-bf048a45159f-config-volume\") pod \"coredns-7c65d6cfc9-jxjcc\" (UID: \"4867af55-bd3a-4712-8e0f-bf048a45159f\") " pod="kube-system/coredns-7c65d6cfc9-jxjcc"
Sep 13 01:35:30.399476 kubelet[2688]: I0913 01:35:30.399462 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7bfd1f1e-041a-4f50-ba7b-87c117566d38-calico-apiserver-certs\") pod \"calico-apiserver-948d44db6-dqp2g\" (UID: \"7bfd1f1e-041a-4f50-ba7b-87c117566d38\") " pod="calico-apiserver/calico-apiserver-948d44db6-dqp2g"
Sep 13 01:35:30.400224 kubelet[2688]: I0913 01:35:30.400166 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8755f74-5b78-4c9e-aa5f-ad0f5cad8960-whisker-ca-bundle\") pod \"whisker-544dd58775-5nxng\" (UID: \"e8755f74-5b78-4c9e-aa5f-ad0f5cad8960\") " pod="calico-system/whisker-544dd58775-5nxng"
Sep 13 01:35:30.500615 kubelet[2688]: I0913 01:35:30.500575 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m98s7\" (UniqueName: \"kubernetes.io/projected/18937242-9ba8-46ed-bd00-32bfe4ab9056-kube-api-access-m98s7\") pod \"coredns-7c65d6cfc9-7rjjf\" (UID: \"18937242-9ba8-46ed-bd00-32bfe4ab9056\") " pod="kube-system/coredns-7c65d6cfc9-7rjjf"
Sep 13 01:35:30.501589 kubelet[2688]: I0913 01:35:30.501232 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8pq5\" (UniqueName: \"kubernetes.io/projected/ccc7d775-178b-498c-91bf-f43ba47754ca-kube-api-access-v8pq5\") pod \"calico-kube-controllers-84d8c5c799-tjhw6\" (UID: \"ccc7d775-178b-498c-91bf-f43ba47754ca\") " pod="calico-system/calico-kube-controllers-84d8c5c799-tjhw6"
Sep 13 01:35:30.501769 kubelet[2688]: I0913 01:35:30.501739 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03-config\") pod \"goldmane-7988f88666-2qlzv\" (UID: \"e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03\") " pod="calico-system/goldmane-7988f88666-2qlzv"
Sep 13 01:35:30.501880 kubelet[2688]: I0913 01:35:30.501864 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03-goldmane-ca-bundle\") pod \"goldmane-7988f88666-2qlzv\" (UID: \"e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03\") " pod="calico-system/goldmane-7988f88666-2qlzv"
Sep 13 01:35:30.502000 kubelet[2688]: I0913 01:35:30.501986 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42ab5820-ae33-4608-be70-9aaefef7b587-calico-apiserver-certs\") pod \"calico-apiserver-948d44db6-dx4zf\" (UID: \"42ab5820-ae33-4608-be70-9aaefef7b587\") " pod="calico-apiserver/calico-apiserver-948d44db6-dx4zf"
Sep 13 01:35:30.502112 kubelet[2688]: I0913 01:35:30.502098 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3671d0d8-9f53-4f15-828c-de3c99528bf3-calico-apiserver-certs\") pod \"calico-apiserver-cb6df9659-4gj4b\" (UID: \"3671d0d8-9f53-4f15-828c-de3c99528bf3\") " pod="calico-apiserver/calico-apiserver-cb6df9659-4gj4b"
Sep 13 01:35:30.502220 kubelet[2688]: I0913 01:35:30.502205 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx8gv\" (UniqueName: \"kubernetes.io/projected/42ab5820-ae33-4608-be70-9aaefef7b587-kube-api-access-dx8gv\") pod \"calico-apiserver-948d44db6-dx4zf\" (UID: \"42ab5820-ae33-4608-be70-9aaefef7b587\") " pod="calico-apiserver/calico-apiserver-948d44db6-dx4zf"
Sep 13 01:35:30.502323 kubelet[2688]: I0913 01:35:30.502310 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccc7d775-178b-498c-91bf-f43ba47754ca-tigera-ca-bundle\") pod \"calico-kube-controllers-84d8c5c799-tjhw6\" (UID: \"ccc7d775-178b-498c-91bf-f43ba47754ca\") " pod="calico-system/calico-kube-controllers-84d8c5c799-tjhw6"
Sep 13 01:35:30.502420 kubelet[2688]: I0913 01:35:30.502407 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsmkl\" (UniqueName: \"kubernetes.io/projected/e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03-kube-api-access-bsmkl\") pod \"goldmane-7988f88666-2qlzv\" (UID: \"e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03\") " pod="calico-system/goldmane-7988f88666-2qlzv"
Sep 13 01:35:30.502530 kubelet[2688]: I0913 01:35:30.502515 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vspg7\" (UniqueName: \"kubernetes.io/projected/3671d0d8-9f53-4f15-828c-de3c99528bf3-kube-api-access-vspg7\") pod \"calico-apiserver-cb6df9659-4gj4b\" (UID: \"3671d0d8-9f53-4f15-828c-de3c99528bf3\") " pod="calico-apiserver/calico-apiserver-cb6df9659-4gj4b"
Sep 13 01:35:30.502645 kubelet[2688]: I0913 01:35:30.502631 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03-goldmane-key-pair\") pod \"goldmane-7988f88666-2qlzv\" (UID: \"e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03\") " pod="calico-system/goldmane-7988f88666-2qlzv"
Sep 13 01:35:30.502743 kubelet[2688]: I0913 01:35:30.502729 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18937242-9ba8-46ed-bd00-32bfe4ab9056-config-volume\") pod \"coredns-7c65d6cfc9-7rjjf\" (UID: \"18937242-9ba8-46ed-bd00-32bfe4ab9056\") " pod="kube-system/coredns-7c65d6cfc9-7rjjf"
Sep 13 01:35:30.621735 env[1589]: time="2025-09-13T01:35:30.618846291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jxjcc,Uid:4867af55-bd3a-4712-8e0f-bf048a45159f,Namespace:kube-system,Attempt:0,}"
Sep 13 01:35:30.621735 env[1589]: time="2025-09-13T01:35:30.621527168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-544dd58775-5nxng,Uid:e8755f74-5b78-4c9e-aa5f-ad0f5cad8960,Namespace:calico-system,Attempt:0,}"
Sep 13 01:35:30.624532 env[1589]: time="2025-09-13T01:35:30.624341045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948d44db6-dqp2g,Uid:7bfd1f1e-041a-4f50-ba7b-87c117566d38,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 01:35:30.636846 env[1589]: time="2025-09-13T01:35:30.636591512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-2qlzv,Uid:e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03,Namespace:calico-system,Attempt:0,}"
Sep 13 01:35:30.640716 env[1589]: time="2025-09-13T01:35:30.640539387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cb6df9659-4gj4b,Uid:3671d0d8-9f53-4f15-828c-de3c99528bf3,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 01:35:30.653511 env[1589]: time="2025-09-13T01:35:30.653470173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7rjjf,Uid:18937242-9ba8-46ed-bd00-32bfe4ab9056,Namespace:kube-system,Attempt:0,}"
Sep 13 01:35:30.655329 env[1589]: time="2025-09-13T01:35:30.655156291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84d8c5c799-tjhw6,Uid:ccc7d775-178b-498c-91bf-f43ba47754ca,Namespace:calico-system,Attempt:0,}"
Sep 13 01:35:30.655877 env[1589]: time="2025-09-13T01:35:30.655845211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948d44db6-dx4zf,Uid:42ab5820-ae33-4608-be70-9aaefef7b587,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 01:35:31.136101 env[1589]: time="2025-09-13T01:35:31.135907332Z" level=info msg="shim disconnected" id=9180e27ff634438f2dd4d2e3e4da6789ad00bf815e8d06357b8f12dd46511ada
Sep 13 01:35:31.136101 env[1589]: time="2025-09-13T01:35:31.135951372Z" level=warning msg="cleaning up after shim disconnected" id=9180e27ff634438f2dd4d2e3e4da6789ad00bf815e8d06357b8f12dd46511ada namespace=k8s.io
Sep 13 01:35:31.136101 env[1589]: time="2025-09-13T01:35:31.135962052Z" level=info msg="cleaning up dead shim"
Sep 13 01:35:31.143248 env[1589]: time="2025-09-13T01:35:31.143200764Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:35:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3481 runtime=io.containerd.runc.v2\n"
Sep 13 01:35:31.376394 env[1589]: time="2025-09-13T01:35:31.376321196Z" level=error msg="Failed to destroy network for sandbox \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.376733 env[1589]: time="2025-09-13T01:35:31.376673435Z" level=error msg="encountered an error cleaning up failed sandbox \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.376733 env[1589]: time="2025-09-13T01:35:31.376717595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948d44db6-dqp2g,Uid:7bfd1f1e-041a-4f50-ba7b-87c117566d38,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.377397 kubelet[2688]: E0913 01:35:31.376974 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.377397 kubelet[2688]: E0913 01:35:31.377042 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948d44db6-dqp2g"
Sep 13 01:35:31.377397 kubelet[2688]: E0913 01:35:31.377061 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948d44db6-dqp2g"
Sep 13 01:35:31.379135 kubelet[2688]: E0913 01:35:31.377099 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-948d44db6-dqp2g_calico-apiserver(7bfd1f1e-041a-4f50-ba7b-87c117566d38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-948d44db6-dqp2g_calico-apiserver(7bfd1f1e-041a-4f50-ba7b-87c117566d38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-948d44db6-dqp2g" podUID="7bfd1f1e-041a-4f50-ba7b-87c117566d38"
Sep 13 01:35:31.571425 env[1589]: time="2025-09-13T01:35:31.571364148Z" level=error msg="Failed to destroy network for sandbox \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.574153 env[1589]: time="2025-09-13T01:35:31.571715107Z" level=error msg="encountered an error cleaning up failed sandbox \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.574153 env[1589]: time="2025-09-13T01:35:31.571759987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jxjcc,Uid:4867af55-bd3a-4712-8e0f-bf048a45159f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.574456 kubelet[2688]: E0913 01:35:31.573965 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.574456 kubelet[2688]: E0913 01:35:31.574233 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-jxjcc"
Sep 13 01:35:31.574456 kubelet[2688]: E0913 01:35:31.574257 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-jxjcc"
Sep 13 01:35:31.574696 kubelet[2688]: E0913 01:35:31.574403 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-jxjcc_kube-system(4867af55-bd3a-4712-8e0f-bf048a45159f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-jxjcc_kube-system(4867af55-bd3a-4712-8e0f-bf048a45159f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-jxjcc" podUID="4867af55-bd3a-4712-8e0f-bf048a45159f"
Sep 13 01:35:31.597438 env[1589]: time="2025-09-13T01:35:31.597372000Z" level=error msg="Failed to destroy network for sandbox \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.602387 env[1589]: time="2025-09-13T01:35:31.597730200Z" level=error msg="encountered an error cleaning up failed sandbox \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.602387 env[1589]: time="2025-09-13T01:35:31.597775680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-544dd58775-5nxng,Uid:e8755f74-5b78-4c9e-aa5f-ad0f5cad8960,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.602387 env[1589]: time="2025-09-13T01:35:31.597900639Z" level=error msg="Failed to destroy network for sandbox \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.602387 env[1589]: time="2025-09-13T01:35:31.598156879Z" level=error msg="encountered an error cleaning up failed sandbox \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.602387 env[1589]: time="2025-09-13T01:35:31.598207399Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cb6df9659-4gj4b,Uid:3671d0d8-9f53-4f15-828c-de3c99528bf3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.602589 kubelet[2688]: E0913 01:35:31.602005 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.602589 kubelet[2688]: E0913 01:35:31.602061 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-cb6df9659-4gj4b"
Sep 13 01:35:31.602589 kubelet[2688]: E0913 01:35:31.602080 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-cb6df9659-4gj4b"
Sep 13 01:35:31.602680 kubelet[2688]: E0913 01:35:31.602121 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-cb6df9659-4gj4b_calico-apiserver(3671d0d8-9f53-4f15-828c-de3c99528bf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-cb6df9659-4gj4b_calico-apiserver(3671d0d8-9f53-4f15-828c-de3c99528bf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-cb6df9659-4gj4b" podUID="3671d0d8-9f53-4f15-828c-de3c99528bf3"
Sep 13 01:35:31.607142 kubelet[2688]: E0913 01:35:31.605945 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.607142 kubelet[2688]: E0913 01:35:31.605992 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-544dd58775-5nxng"
Sep 13 01:35:31.607142 kubelet[2688]: E0913 01:35:31.606020 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-544dd58775-5nxng"
Sep 13 01:35:31.607398 kubelet[2688]: E0913 01:35:31.606137 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-544dd58775-5nxng_calico-system(e8755f74-5b78-4c9e-aa5f-ad0f5cad8960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-544dd58775-5nxng_calico-system(e8755f74-5b78-4c9e-aa5f-ad0f5cad8960)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-544dd58775-5nxng" podUID="e8755f74-5b78-4c9e-aa5f-ad0f5cad8960"
Sep 13 01:35:31.609453 env[1589]: time="2025-09-13T01:35:31.608847788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vpflq,Uid:77870db6-b52e-4395-a518-9c1b7d66eb0e,Namespace:calico-system,Attempt:0,}"
Sep 13 01:35:31.646262 env[1589]: time="2025-09-13T01:35:31.646144468Z" level=error msg="Failed to destroy network for sandbox \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.646571 env[1589]: time="2025-09-13T01:35:31.646532668Z" level=error msg="encountered an error cleaning up failed sandbox \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.646626 env[1589]: time="2025-09-13T01:35:31.646580148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7rjjf,Uid:18937242-9ba8-46ed-bd00-32bfe4ab9056,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.647774 kubelet[2688]: E0913 01:35:31.646780 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:35:31.647774 kubelet[2688]: E0913 01:35:31.646837 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7rjjf"
Sep 13 01:35:31.647774 kubelet[2688]: E0913 01:35:31.646858 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox
\"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7rjjf" Sep 13 01:35:31.647934 kubelet[2688]: E0913 01:35:31.646912 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-7rjjf_kube-system(18937242-9ba8-46ed-bd00-32bfe4ab9056)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-7rjjf_kube-system(18937242-9ba8-46ed-bd00-32bfe4ab9056)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7rjjf" podUID="18937242-9ba8-46ed-bd00-32bfe4ab9056" Sep 13 01:35:31.651721 env[1589]: time="2025-09-13T01:35:31.651674542Z" level=error msg="Failed to destroy network for sandbox \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.652066 env[1589]: time="2025-09-13T01:35:31.652024342Z" level=error msg="encountered an error cleaning up failed sandbox \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.652132 env[1589]: time="2025-09-13T01:35:31.652072302Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-2qlzv,Uid:e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.652488 kubelet[2688]: E0913 01:35:31.652333 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.652488 kubelet[2688]: E0913 01:35:31.652384 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-2qlzv" Sep 13 01:35:31.652488 kubelet[2688]: E0913 01:35:31.652400 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-2qlzv" Sep 13 01:35:31.652608 kubelet[2688]: E0913 01:35:31.652439 2688 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-2qlzv_calico-system(e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-2qlzv_calico-system(e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-2qlzv" podUID="e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03" Sep 13 01:35:31.654212 env[1589]: time="2025-09-13T01:35:31.654098580Z" level=error msg="Failed to destroy network for sandbox \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.655345 env[1589]: time="2025-09-13T01:35:31.655289578Z" level=error msg="encountered an error cleaning up failed sandbox \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.655481 env[1589]: time="2025-09-13T01:35:31.655453578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84d8c5c799-tjhw6,Uid:ccc7d775-178b-498c-91bf-f43ba47754ca,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.655839 kubelet[2688]: E0913 01:35:31.655704 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.655839 kubelet[2688]: E0913 01:35:31.655748 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84d8c5c799-tjhw6" Sep 13 01:35:31.655839 kubelet[2688]: E0913 01:35:31.655763 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84d8c5c799-tjhw6" Sep 13 01:35:31.656079 kubelet[2688]: E0913 01:35:31.655795 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84d8c5c799-tjhw6_calico-system(ccc7d775-178b-498c-91bf-f43ba47754ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84d8c5c799-tjhw6_calico-system(ccc7d775-178b-498c-91bf-f43ba47754ca)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84d8c5c799-tjhw6" podUID="ccc7d775-178b-498c-91bf-f43ba47754ca" Sep 13 01:35:31.668337 env[1589]: time="2025-09-13T01:35:31.668282564Z" level=error msg="Failed to destroy network for sandbox \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.668693 env[1589]: time="2025-09-13T01:35:31.668657084Z" level=error msg="encountered an error cleaning up failed sandbox \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.668734 env[1589]: time="2025-09-13T01:35:31.668707644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948d44db6-dx4zf,Uid:42ab5820-ae33-4608-be70-9aaefef7b587,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.669335 kubelet[2688]: E0913 01:35:31.668956 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.669335 kubelet[2688]: E0913 01:35:31.669015 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948d44db6-dx4zf" Sep 13 01:35:31.669335 kubelet[2688]: E0913 01:35:31.669038 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948d44db6-dx4zf" Sep 13 01:35:31.669478 kubelet[2688]: E0913 01:35:31.669076 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-948d44db6-dx4zf_calico-apiserver(42ab5820-ae33-4608-be70-9aaefef7b587)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-948d44db6-dx4zf_calico-apiserver(42ab5820-ae33-4608-be70-9aaefef7b587)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-948d44db6-dx4zf" podUID="42ab5820-ae33-4608-be70-9aaefef7b587" Sep 13 01:35:31.702387 env[1589]: time="2025-09-13T01:35:31.702322528Z" level=error msg="Failed to destroy network for sandbox \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.702713 env[1589]: time="2025-09-13T01:35:31.702677928Z" level=error msg="encountered an error cleaning up failed sandbox \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.702749 env[1589]: time="2025-09-13T01:35:31.702729248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vpflq,Uid:77870db6-b52e-4395-a518-9c1b7d66eb0e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.703373 kubelet[2688]: E0913 01:35:31.702958 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.703373 kubelet[2688]: E0913 01:35:31.703024 2688 kuberuntime_sandbox.go:72] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vpflq" Sep 13 01:35:31.703373 kubelet[2688]: E0913 01:35:31.703045 2688 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vpflq" Sep 13 01:35:31.703565 kubelet[2688]: E0913 01:35:31.703106 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vpflq_calico-system(77870db6-b52e-4395-a518-9c1b7d66eb0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vpflq_calico-system(77870db6-b52e-4395-a518-9c1b7d66eb0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vpflq" podUID="77870db6-b52e-4395-a518-9c1b7d66eb0e" Sep 13 01:35:31.900650 kubelet[2688]: I0913 01:35:31.899959 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Sep 13 01:35:31.901899 env[1589]: time="2025-09-13T01:35:31.901860075Z" level=info msg="StopPodSandbox for 
\"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\"" Sep 13 01:35:31.904322 kubelet[2688]: I0913 01:35:31.904302 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Sep 13 01:35:31.905058 env[1589]: time="2025-09-13T01:35:31.905020672Z" level=info msg="StopPodSandbox for \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\"" Sep 13 01:35:31.907937 kubelet[2688]: I0913 01:35:31.907906 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Sep 13 01:35:31.909075 env[1589]: time="2025-09-13T01:35:31.909045268Z" level=info msg="StopPodSandbox for \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\"" Sep 13 01:35:31.927967 env[1589]: time="2025-09-13T01:35:31.927353488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 01:35:31.928813 kubelet[2688]: I0913 01:35:31.928763 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Sep 13 01:35:31.929502 env[1589]: time="2025-09-13T01:35:31.929301126Z" level=info msg="StopPodSandbox for \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\"" Sep 13 01:35:31.931166 kubelet[2688]: I0913 01:35:31.930804 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Sep 13 01:35:31.931440 env[1589]: time="2025-09-13T01:35:31.931413124Z" level=info msg="StopPodSandbox for \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\"" Sep 13 01:35:31.941099 kubelet[2688]: I0913 01:35:31.940551 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Sep 13 
01:35:31.941412 env[1589]: time="2025-09-13T01:35:31.941371353Z" level=info msg="StopPodSandbox for \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\"" Sep 13 01:35:31.945408 kubelet[2688]: I0913 01:35:31.945124 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Sep 13 01:35:31.947001 env[1589]: time="2025-09-13T01:35:31.946048148Z" level=info msg="StopPodSandbox for \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\"" Sep 13 01:35:31.947317 kubelet[2688]: I0913 01:35:31.947285 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Sep 13 01:35:31.948062 env[1589]: time="2025-09-13T01:35:31.948026186Z" level=info msg="StopPodSandbox for \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\"" Sep 13 01:35:31.950578 kubelet[2688]: I0913 01:35:31.950554 2688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Sep 13 01:35:31.951298 env[1589]: time="2025-09-13T01:35:31.951261743Z" level=info msg="StopPodSandbox for \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\"" Sep 13 01:35:31.979936 env[1589]: time="2025-09-13T01:35:31.979867632Z" level=error msg="StopPodSandbox for \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\" failed" error="failed to destroy network for sandbox \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.980462 kubelet[2688]: E0913 01:35:31.980253 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Sep 13 01:35:31.980462 kubelet[2688]: E0913 01:35:31.980325 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4"} Sep 13 01:35:31.980462 kubelet[2688]: E0913 01:35:31.980392 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4867af55-bd3a-4712-8e0f-bf048a45159f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 01:35:31.980462 kubelet[2688]: E0913 01:35:31.980416 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4867af55-bd3a-4712-8e0f-bf048a45159f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-jxjcc" podUID="4867af55-bd3a-4712-8e0f-bf048a45159f" Sep 13 01:35:31.995509 env[1589]: time="2025-09-13T01:35:31.995421856Z" level=error msg="StopPodSandbox for \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\" failed" error="failed to destroy 
network for sandbox \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:31.996012 kubelet[2688]: E0913 01:35:31.995780 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Sep 13 01:35:31.996012 kubelet[2688]: E0913 01:35:31.995868 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1"} Sep 13 01:35:31.996012 kubelet[2688]: E0913 01:35:31.995910 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ccc7d775-178b-498c-91bf-f43ba47754ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 01:35:31.996012 kubelet[2688]: E0913 01:35:31.995957 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ccc7d775-178b-498c-91bf-f43ba47754ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84d8c5c799-tjhw6" podUID="ccc7d775-178b-498c-91bf-f43ba47754ca" Sep 13 01:35:32.039261 env[1589]: time="2025-09-13T01:35:32.039146530Z" level=error msg="StopPodSandbox for \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\" failed" error="failed to destroy network for sandbox \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:32.039699 kubelet[2688]: E0913 01:35:32.039656 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Sep 13 01:35:32.039791 kubelet[2688]: E0913 01:35:32.039709 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7"} Sep 13 01:35:32.039791 kubelet[2688]: E0913 01:35:32.039744 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7bfd1f1e-041a-4f50-ba7b-87c117566d38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" Sep 13 01:35:32.039791 kubelet[2688]: E0913 01:35:32.039768 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7bfd1f1e-041a-4f50-ba7b-87c117566d38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-948d44db6-dqp2g" podUID="7bfd1f1e-041a-4f50-ba7b-87c117566d38" Sep 13 01:35:32.078262 env[1589]: time="2025-09-13T01:35:32.078016249Z" level=error msg="StopPodSandbox for \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\" failed" error="failed to destroy network for sandbox \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:32.079164 env[1589]: time="2025-09-13T01:35:32.079005568Z" level=error msg="StopPodSandbox for \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\" failed" error="failed to destroy network for sandbox \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:32.079663 kubelet[2688]: E0913 01:35:32.079401 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Sep 13 01:35:32.079663 kubelet[2688]: E0913 01:35:32.079398 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Sep 13 01:35:32.079663 kubelet[2688]: E0913 01:35:32.079489 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33"} Sep 13 01:35:32.079663 kubelet[2688]: E0913 01:35:32.079520 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42ab5820-ae33-4608-be70-9aaefef7b587\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 01:35:32.079911 kubelet[2688]: E0913 01:35:32.079543 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42ab5820-ae33-4608-be70-9aaefef7b587\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-948d44db6-dx4zf" podUID="42ab5820-ae33-4608-be70-9aaefef7b587" Sep 13 01:35:32.079911 kubelet[2688]: E0913 01:35:32.079453 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0"} Sep 13 01:35:32.079911 kubelet[2688]: E0913 01:35:32.079597 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"18937242-9ba8-46ed-bd00-32bfe4ab9056\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 01:35:32.079911 kubelet[2688]: E0913 01:35:32.079613 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"18937242-9ba8-46ed-bd00-32bfe4ab9056\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7rjjf" podUID="18937242-9ba8-46ed-bd00-32bfe4ab9056" Sep 13 01:35:32.080896 env[1589]: time="2025-09-13T01:35:32.080844246Z" level=error msg="StopPodSandbox for \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\" failed" error="failed to destroy network for sandbox \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:32.081076 kubelet[2688]: E0913 01:35:32.081041 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Sep 13 01:35:32.081120 kubelet[2688]: E0913 01:35:32.081084 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9"} Sep 13 01:35:32.081120 kubelet[2688]: E0913 01:35:32.081111 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3671d0d8-9f53-4f15-828c-de3c99528bf3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 01:35:32.081247 kubelet[2688]: E0913 01:35:32.081130 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3671d0d8-9f53-4f15-828c-de3c99528bf3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-cb6df9659-4gj4b" podUID="3671d0d8-9f53-4f15-828c-de3c99528bf3" Sep 13 01:35:32.084070 env[1589]: time="2025-09-13T01:35:32.084029283Z" level=error msg="StopPodSandbox for \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\" failed" error="failed to destroy network for sandbox \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:32.084478 kubelet[2688]: E0913 01:35:32.084355 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Sep 13 01:35:32.084478 kubelet[2688]: E0913 01:35:32.084392 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450"} Sep 13 01:35:32.084478 kubelet[2688]: E0913 01:35:32.084432 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8755f74-5b78-4c9e-aa5f-ad0f5cad8960\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 01:35:32.084478 kubelet[2688]: E0913 01:35:32.084453 2688 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"e8755f74-5b78-4c9e-aa5f-ad0f5cad8960\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-544dd58775-5nxng" podUID="e8755f74-5b78-4c9e-aa5f-ad0f5cad8960" Sep 13 01:35:32.092562 env[1589]: time="2025-09-13T01:35:32.092512674Z" level=error msg="StopPodSandbox for \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\" failed" error="failed to destroy network for sandbox \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:32.092742 kubelet[2688]: E0913 01:35:32.092705 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Sep 13 01:35:32.092804 kubelet[2688]: E0913 01:35:32.092751 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc"} Sep 13 01:35:32.092804 kubelet[2688]: E0913 01:35:32.092779 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 01:35:32.092878 kubelet[2688]: E0913 01:35:32.092804 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-2qlzv" podUID="e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03" Sep 13 01:35:32.096627 env[1589]: time="2025-09-13T01:35:32.096575710Z" level=error msg="StopPodSandbox for \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\" failed" error="failed to destroy network for sandbox \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:35:32.096874 kubelet[2688]: E0913 01:35:32.096840 2688 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Sep 13 01:35:32.096939 kubelet[2688]: E0913 01:35:32.096883 2688 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be"} Sep 13 01:35:32.096939 kubelet[2688]: E0913 01:35:32.096914 2688 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77870db6-b52e-4395-a518-9c1b7d66eb0e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 01:35:32.097024 kubelet[2688]: E0913 01:35:32.096932 2688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77870db6-b52e-4395-a518-9c1b7d66eb0e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vpflq" podUID="77870db6-b52e-4395-a518-9c1b7d66eb0e" Sep 13 01:35:32.277842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450-shm.mount: Deactivated successfully. Sep 13 01:35:32.277982 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4-shm.mount: Deactivated successfully. 
Sep 13 01:35:32.278061 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7-shm.mount: Deactivated successfully. Sep 13 01:35:38.857133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4178703947.mount: Deactivated successfully. Sep 13 01:35:38.949137 env[1589]: time="2025-09-13T01:35:38.949090914Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:38.957069 env[1589]: time="2025-09-13T01:35:38.957025347Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:38.962636 env[1589]: time="2025-09-13T01:35:38.962606421Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:38.971662 env[1589]: time="2025-09-13T01:35:38.971617733Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:38.972608 env[1589]: time="2025-09-13T01:35:38.972172012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 13 01:35:38.987740 env[1589]: time="2025-09-13T01:35:38.987582398Z" level=info msg="CreateContainer within sandbox \"239dfc2437f367972d06a5b4f9116e68d8c355bdb5a45cdbf2886ee435d3f0cb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 01:35:39.026265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3132390665.mount: Deactivated 
successfully. Sep 13 01:35:39.053966 env[1589]: time="2025-09-13T01:35:39.053907216Z" level=info msg="CreateContainer within sandbox \"239dfc2437f367972d06a5b4f9116e68d8c355bdb5a45cdbf2886ee435d3f0cb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b9419d5cd915862421d9c0e7b507fa6c072e0ade50153f249368ac4ec61c1e1b\"" Sep 13 01:35:39.056174 env[1589]: time="2025-09-13T01:35:39.056126814Z" level=info msg="StartContainer for \"b9419d5cd915862421d9c0e7b507fa6c072e0ade50153f249368ac4ec61c1e1b\"" Sep 13 01:35:39.110395 env[1589]: time="2025-09-13T01:35:39.110286204Z" level=info msg="StartContainer for \"b9419d5cd915862421d9c0e7b507fa6c072e0ade50153f249368ac4ec61c1e1b\" returns successfully" Sep 13 01:35:39.513029 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 01:35:39.513162 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 01:35:39.636972 env[1589]: time="2025-09-13T01:35:39.636936597Z" level=info msg="StopPodSandbox for \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\"" Sep 13 01:35:39.756449 env[1589]: 2025-09-13 01:35:39.714 [INFO][3921] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Sep 13 01:35:39.756449 env[1589]: 2025-09-13 01:35:39.715 [INFO][3921] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" iface="eth0" netns="/var/run/netns/cni-072f6faf-8862-931d-0229-007f2b4822cd" Sep 13 01:35:39.756449 env[1589]: 2025-09-13 01:35:39.715 [INFO][3921] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" iface="eth0" netns="/var/run/netns/cni-072f6faf-8862-931d-0229-007f2b4822cd" Sep 13 01:35:39.756449 env[1589]: 2025-09-13 01:35:39.715 [INFO][3921] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" iface="eth0" netns="/var/run/netns/cni-072f6faf-8862-931d-0229-007f2b4822cd" Sep 13 01:35:39.756449 env[1589]: 2025-09-13 01:35:39.715 [INFO][3921] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Sep 13 01:35:39.756449 env[1589]: 2025-09-13 01:35:39.715 [INFO][3921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Sep 13 01:35:39.756449 env[1589]: 2025-09-13 01:35:39.742 [INFO][3933] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" HandleID="k8s-pod-network.292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--544dd58775--5nxng-eth0" Sep 13 01:35:39.756449 env[1589]: 2025-09-13 01:35:39.742 [INFO][3933] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:39.756449 env[1589]: 2025-09-13 01:35:39.742 [INFO][3933] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:39.756449 env[1589]: 2025-09-13 01:35:39.751 [WARNING][3933] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" HandleID="k8s-pod-network.292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--544dd58775--5nxng-eth0" Sep 13 01:35:39.756449 env[1589]: 2025-09-13 01:35:39.751 [INFO][3933] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" HandleID="k8s-pod-network.292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--544dd58775--5nxng-eth0" Sep 13 01:35:39.756449 env[1589]: 2025-09-13 01:35:39.752 [INFO][3933] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:39.756449 env[1589]: 2025-09-13 01:35:39.754 [INFO][3921] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Sep 13 01:35:39.757024 env[1589]: time="2025-09-13T01:35:39.756580966Z" level=info msg="TearDown network for sandbox \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\" successfully" Sep 13 01:35:39.757024 env[1589]: time="2025-09-13T01:35:39.756623686Z" level=info msg="StopPodSandbox for \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\" returns successfully" Sep 13 01:35:39.858080 systemd[1]: run-netns-cni\x2d072f6faf\x2d8862\x2d931d\x2d0229\x2d007f2b4822cd.mount: Deactivated successfully. 
Sep 13 01:35:39.870857 kubelet[2688]: I0913 01:35:39.870808 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8755f74-5b78-4c9e-aa5f-ad0f5cad8960-whisker-ca-bundle\") pod \"e8755f74-5b78-4c9e-aa5f-ad0f5cad8960\" (UID: \"e8755f74-5b78-4c9e-aa5f-ad0f5cad8960\") " Sep 13 01:35:39.871283 kubelet[2688]: I0913 01:35:39.870871 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmcmf\" (UniqueName: \"kubernetes.io/projected/e8755f74-5b78-4c9e-aa5f-ad0f5cad8960-kube-api-access-nmcmf\") pod \"e8755f74-5b78-4c9e-aa5f-ad0f5cad8960\" (UID: \"e8755f74-5b78-4c9e-aa5f-ad0f5cad8960\") " Sep 13 01:35:39.871283 kubelet[2688]: I0913 01:35:39.870899 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8755f74-5b78-4c9e-aa5f-ad0f5cad8960-whisker-backend-key-pair\") pod \"e8755f74-5b78-4c9e-aa5f-ad0f5cad8960\" (UID: \"e8755f74-5b78-4c9e-aa5f-ad0f5cad8960\") " Sep 13 01:35:39.871618 kubelet[2688]: I0913 01:35:39.871589 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8755f74-5b78-4c9e-aa5f-ad0f5cad8960-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e8755f74-5b78-4c9e-aa5f-ad0f5cad8960" (UID: "e8755f74-5b78-4c9e-aa5f-ad0f5cad8960"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 01:35:39.876085 systemd[1]: var-lib-kubelet-pods-e8755f74\x2d5b78\x2d4c9e\x2daa5f\x2dad0f5cad8960-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnmcmf.mount: Deactivated successfully. Sep 13 01:35:39.878840 systemd[1]: var-lib-kubelet-pods-e8755f74\x2d5b78\x2d4c9e\x2daa5f\x2dad0f5cad8960-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 13 01:35:39.880407 kubelet[2688]: I0913 01:35:39.880363 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8755f74-5b78-4c9e-aa5f-ad0f5cad8960-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e8755f74-5b78-4c9e-aa5f-ad0f5cad8960" (UID: "e8755f74-5b78-4c9e-aa5f-ad0f5cad8960"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 01:35:39.880505 kubelet[2688]: I0913 01:35:39.880495 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8755f74-5b78-4c9e-aa5f-ad0f5cad8960-kube-api-access-nmcmf" (OuterVolumeSpecName: "kube-api-access-nmcmf") pod "e8755f74-5b78-4c9e-aa5f-ad0f5cad8960" (UID: "e8755f74-5b78-4c9e-aa5f-ad0f5cad8960"). InnerVolumeSpecName "kube-api-access-nmcmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 01:35:39.974376 kubelet[2688]: I0913 01:35:39.974340 2688 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8755f74-5b78-4c9e-aa5f-ad0f5cad8960-whisker-ca-bundle\") on node \"ci-3510.3.8-n-9d226ffbbf\" DevicePath \"\"" Sep 13 01:35:39.974376 kubelet[2688]: I0913 01:35:39.974369 2688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmcmf\" (UniqueName: \"kubernetes.io/projected/e8755f74-5b78-4c9e-aa5f-ad0f5cad8960-kube-api-access-nmcmf\") on node \"ci-3510.3.8-n-9d226ffbbf\" DevicePath \"\"" Sep 13 01:35:39.974376 kubelet[2688]: I0913 01:35:39.974381 2688 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8755f74-5b78-4c9e-aa5f-ad0f5cad8960-whisker-backend-key-pair\") on node \"ci-3510.3.8-n-9d226ffbbf\" DevicePath \"\"" Sep 13 01:35:40.014674 kubelet[2688]: I0913 01:35:40.014601 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cc8fs" 
podStartSLOduration=2.793058495 podStartE2EDuration="21.014583928s" podCreationTimestamp="2025-09-13 01:35:19 +0000 UTC" firstStartedPulling="2025-09-13 01:35:20.751599259 +0000 UTC m=+22.252437491" lastFinishedPulling="2025-09-13 01:35:38.973124692 +0000 UTC m=+40.473962924" observedRunningTime="2025-09-13 01:35:39.994883866 +0000 UTC m=+41.495722098" watchObservedRunningTime="2025-09-13 01:35:40.014583928 +0000 UTC m=+41.515422160" Sep 13 01:35:40.176675 kubelet[2688]: I0913 01:35:40.176640 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b39347b7-5d84-4091-a088-39ef82c8be1a-whisker-backend-key-pair\") pod \"whisker-68bc4666b6-lvk7q\" (UID: \"b39347b7-5d84-4091-a088-39ef82c8be1a\") " pod="calico-system/whisker-68bc4666b6-lvk7q" Sep 13 01:35:40.176817 kubelet[2688]: I0913 01:35:40.176687 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dv76\" (UniqueName: \"kubernetes.io/projected/b39347b7-5d84-4091-a088-39ef82c8be1a-kube-api-access-5dv76\") pod \"whisker-68bc4666b6-lvk7q\" (UID: \"b39347b7-5d84-4091-a088-39ef82c8be1a\") " pod="calico-system/whisker-68bc4666b6-lvk7q" Sep 13 01:35:40.176817 kubelet[2688]: I0913 01:35:40.176711 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b39347b7-5d84-4091-a088-39ef82c8be1a-whisker-ca-bundle\") pod \"whisker-68bc4666b6-lvk7q\" (UID: \"b39347b7-5d84-4091-a088-39ef82c8be1a\") " pod="calico-system/whisker-68bc4666b6-lvk7q" Sep 13 01:35:40.371077 env[1589]: time="2025-09-13T01:35:40.370759284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68bc4666b6-lvk7q,Uid:b39347b7-5d84-4091-a088-39ef82c8be1a,Namespace:calico-system,Attempt:0,}" Sep 13 01:35:40.584157 systemd-networkd[1771]: cali6bf91d84c19: Link UP Sep 13 
01:35:40.599217 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 01:35:40.599320 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6bf91d84c19: link becomes ready Sep 13 01:35:40.600590 systemd-networkd[1771]: cali6bf91d84c19: Gained carrier Sep 13 01:35:40.606503 kubelet[2688]: I0913 01:35:40.606459 2688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8755f74-5b78-4c9e-aa5f-ad0f5cad8960" path="/var/lib/kubelet/pods/e8755f74-5b78-4c9e-aa5f-ad0f5cad8960/volumes" Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.423 [INFO][3955] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.446 [INFO][3955] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--9d226ffbbf-k8s-whisker--68bc4666b6--lvk7q-eth0 whisker-68bc4666b6- calico-system b39347b7-5d84-4091-a088-39ef82c8be1a 908 0 2025-09-13 01:35:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:68bc4666b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.8-n-9d226ffbbf whisker-68bc4666b6-lvk7q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6bf91d84c19 [] [] }} ContainerID="36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" Namespace="calico-system" Pod="whisker-68bc4666b6-lvk7q" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--68bc4666b6--lvk7q-" Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.446 [INFO][3955] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" Namespace="calico-system" Pod="whisker-68bc4666b6-lvk7q" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--68bc4666b6--lvk7q-eth0" Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.511 [INFO][3967] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" HandleID="k8s-pod-network.36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--68bc4666b6--lvk7q-eth0" Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.511 [INFO][3967] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" HandleID="k8s-pod-network.36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--68bc4666b6--lvk7q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-9d226ffbbf", "pod":"whisker-68bc4666b6-lvk7q", "timestamp":"2025-09-13 01:35:40.511014637 +0000 UTC"}, Hostname:"ci-3510.3.8-n-9d226ffbbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.511 [INFO][3967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.511 [INFO][3967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.511 [INFO][3967] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-9d226ffbbf' Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.520 [INFO][3967] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.524 [INFO][3967] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.527 [INFO][3967] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.529 [INFO][3967] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.531 [INFO][3967] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.531 [INFO][3967] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.532 [INFO][3967] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.537 [INFO][3967] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.546 [INFO][3967] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.44.65/26] block=192.168.44.64/26 
handle="k8s-pod-network.36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.546 [INFO][3967] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.65/26] handle="k8s-pod-network.36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.546 [INFO][3967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:40.619386 env[1589]: 2025-09-13 01:35:40.546 [INFO][3967] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.65/26] IPv6=[] ContainerID="36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" HandleID="k8s-pod-network.36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--68bc4666b6--lvk7q-eth0" Sep 13 01:35:40.620218 env[1589]: 2025-09-13 01:35:40.548 [INFO][3955] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" Namespace="calico-system" Pod="whisker-68bc4666b6-lvk7q" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--68bc4666b6--lvk7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-whisker--68bc4666b6--lvk7q-eth0", GenerateName:"whisker-68bc4666b6-", Namespace:"calico-system", SelfLink:"", UID:"b39347b7-5d84-4091-a088-39ef82c8be1a", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68bc4666b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"", Pod:"whisker-68bc4666b6-lvk7q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6bf91d84c19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:40.620218 env[1589]: 2025-09-13 01:35:40.548 [INFO][3955] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.65/32] ContainerID="36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" Namespace="calico-system" Pod="whisker-68bc4666b6-lvk7q" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--68bc4666b6--lvk7q-eth0" Sep 13 01:35:40.620218 env[1589]: 2025-09-13 01:35:40.548 [INFO][3955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6bf91d84c19 ContainerID="36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" Namespace="calico-system" Pod="whisker-68bc4666b6-lvk7q" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--68bc4666b6--lvk7q-eth0" Sep 13 01:35:40.620218 env[1589]: 2025-09-13 01:35:40.601 [INFO][3955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" Namespace="calico-system" Pod="whisker-68bc4666b6-lvk7q" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--68bc4666b6--lvk7q-eth0" Sep 13 01:35:40.620218 env[1589]: 2025-09-13 01:35:40.601 [INFO][3955] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" Namespace="calico-system" Pod="whisker-68bc4666b6-lvk7q" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--68bc4666b6--lvk7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-whisker--68bc4666b6--lvk7q-eth0", GenerateName:"whisker-68bc4666b6-", Namespace:"calico-system", SelfLink:"", UID:"b39347b7-5d84-4091-a088-39ef82c8be1a", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68bc4666b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f", Pod:"whisker-68bc4666b6-lvk7q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6bf91d84c19", MAC:"16:55:7f:05:47:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:40.620218 env[1589]: 2025-09-13 01:35:40.614 [INFO][3955] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f" Namespace="calico-system" Pod="whisker-68bc4666b6-lvk7q" 
WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--68bc4666b6--lvk7q-eth0" Sep 13 01:35:40.631569 env[1589]: time="2025-09-13T01:35:40.631484207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:40.631719 env[1589]: time="2025-09-13T01:35:40.631572007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:40.631719 env[1589]: time="2025-09-13T01:35:40.631608807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:40.631912 env[1589]: time="2025-09-13T01:35:40.631848367Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f pid=3988 runtime=io.containerd.runc.v2 Sep 13 01:35:40.674397 env[1589]: time="2025-09-13T01:35:40.674356928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68bc4666b6-lvk7q,Uid:b39347b7-5d84-4091-a088-39ef82c8be1a,Namespace:calico-system,Attempt:0,} returns sandbox id \"36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f\"" Sep 13 01:35:40.677234 env[1589]: time="2025-09-13T01:35:40.676587046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 01:35:40.926000 audit[4075]: AVC avc: denied { write } for pid=4075 comm="tee" name="fd" dev="proc" ino=25324 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 01:35:40.932000 audit[4072]: AVC avc: denied { write } for pid=4072 comm="tee" name="fd" dev="proc" ino=25722 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 01:35:40.977079 kernel: audit: type=1400 audit(1757727340.926:311): avc: denied { write } for pid=4075 
comm="tee" name="fd" dev="proc" ino=25324 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 01:35:40.977177 kernel: audit: type=1400 audit(1757727340.932:312): avc: denied { write } for pid=4072 comm="tee" name="fd" dev="proc" ino=25722 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 01:35:40.932000 audit[4072]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe6f827c0 a2=241 a3=1b6 items=1 ppid=4040 pid=4072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:40.932000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 13 01:35:41.045620 kernel: audit: type=1300 audit(1757727340.932:312): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe6f827c0 a2=241 a3=1b6 items=1 ppid=4040 pid=4072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.045699 kernel: audit: type=1307 audit(1757727340.932:312): cwd="/etc/service/enabled/node-status-reporter/log" Sep 13 01:35:40.932000 audit: PATH item=0 name="/dev/fd/63" inode=25712 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:35:41.065442 kernel: audit: type=1302 audit(1757727340.932:312): item=0 name="/dev/fd/63" inode=25712 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:35:40.932000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 01:35:41.083663 kernel: audit: type=1327 audit(1757727340.932:312): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 01:35:40.963000 audit[4093]: AVC avc: denied { write } for pid=4093 comm="tee" name="fd" dev="proc" ino=25740 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 01:35:40.963000 audit[4093]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff25a87cf a2=241 a3=1b6 items=1 ppid=4051 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.103278 kernel: audit: type=1400 audit(1757727340.963:313): avc: denied { write } for pid=4093 comm="tee" name="fd" dev="proc" ino=25740 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 01:35:40.963000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 13 01:35:41.137740 kernel: audit: type=1300 audit(1757727340.963:313): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff25a87cf a2=241 a3=1b6 items=1 ppid=4051 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.137812 kernel: audit: type=1307 audit(1757727340.963:313): cwd="/etc/service/enabled/confd/log" Sep 13 01:35:40.963000 audit: PATH item=0 name="/dev/fd/63" inode=25731 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
01:35:40.963000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 01:35:41.156196 kernel: audit: type=1302 audit(1757727340.963:313): item=0 name="/dev/fd/63" inode=25731 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:35:40.963000 audit[4095]: AVC avc: denied { write } for pid=4095 comm="tee" name="fd" dev="proc" ino=25746 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 01:35:40.963000 audit[4095]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc93d47d1 a2=241 a3=1b6 items=1 ppid=4035 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:40.963000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 13 01:35:40.963000 audit: PATH item=0 name="/dev/fd/63" inode=25329 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:35:40.963000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 01:35:40.977000 audit[4098]: AVC avc: denied { write } for pid=4098 comm="tee" name="fd" dev="proc" ino=25751 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 01:35:40.977000 audit[4098]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff51ee7d0 a2=241 a3=1b6 items=1 ppid=4037 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" 
exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:40.977000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 13 01:35:40.977000 audit: PATH item=0 name="/dev/fd/63" inode=25332 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:35:40.977000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 01:35:40.926000 audit[4075]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff83be7cf a2=241 a3=1b6 items=1 ppid=4031 pid=4075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:40.926000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 13 01:35:40.926000 audit: PATH item=0 name="/dev/fd/63" inode=25713 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:35:40.926000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 01:35:40.997000 audit[4103]: AVC avc: denied { write } for pid=4103 comm="tee" name="fd" dev="proc" ino=25755 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 01:35:40.997000 audit[4103]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc2a3f7cf a2=241 a3=1b6 items=1 ppid=4045 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:40.997000 audit: CWD 
cwd="/etc/service/enabled/felix/log" Sep 13 01:35:40.997000 audit: PATH item=0 name="/dev/fd/63" inode=25334 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:35:40.997000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 01:35:41.047000 audit[4106]: AVC avc: denied { write } for pid=4106 comm="tee" name="fd" dev="proc" ino=25339 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 01:35:41.047000 audit[4106]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc20fb7bf a2=241 a3=1b6 items=1 ppid=4033 pid=4106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.047000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 13 01:35:41.047000 audit: PATH item=0 name="/dev/fd/63" inode=25336 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:35:41.047000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 01:35:41.486000 audit[4141]: AVC avc: denied { bpf } for pid=4141 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.486000 audit[4141]: AVC avc: denied { bpf } for pid=4141 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.486000 audit[4141]: AVC avc: denied { 
perfmon } for pid=4141 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.486000 audit[4141]: AVC avc: denied { perfmon } for pid=4141 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.486000 audit[4141]: AVC avc: denied { perfmon } for pid=4141 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.486000 audit[4141]: AVC avc: denied { perfmon } for pid=4141 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.486000 audit[4141]: AVC avc: denied { perfmon } for pid=4141 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.486000 audit[4141]: AVC avc: denied { bpf } for pid=4141 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.486000 audit[4141]: AVC avc: denied { bpf } for pid=4141 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.486000 audit: BPF prog-id=10 op=LOAD Sep 13 01:35:41.486000 audit[4141]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffef09b8a8 a2=98 a3=ffffef09b898 items=0 ppid=4047 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.486000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 01:35:41.487000 audit: BPF prog-id=10 op=UNLOAD Sep 13 01:35:41.487000 audit[4141]: AVC avc: denied { bpf } for pid=4141 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.487000 audit[4141]: AVC avc: denied { bpf } for pid=4141 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.487000 audit[4141]: AVC avc: denied { perfmon } for pid=4141 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.487000 audit[4141]: AVC avc: denied { perfmon } for pid=4141 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.487000 audit[4141]: AVC avc: denied { perfmon } for pid=4141 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.487000 audit[4141]: AVC avc: denied { perfmon } for pid=4141 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.487000 audit[4141]: AVC avc: denied { perfmon } for pid=4141 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.487000 audit[4141]: AVC avc: denied { bpf } for pid=4141 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.487000 audit[4141]: AVC avc: denied { bpf } for pid=4141 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.487000 audit: BPF prog-id=11 op=LOAD Sep 13 01:35:41.487000 audit[4141]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffef09b758 a2=74 a3=95 items=0 ppid=4047 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.487000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 01:35:41.488000 audit: BPF prog-id=11 op=UNLOAD Sep 13 01:35:41.488000 audit[4141]: AVC avc: denied { bpf } for pid=4141 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.488000 audit[4141]: AVC avc: denied { bpf } for pid=4141 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.488000 audit[4141]: AVC avc: denied { perfmon } for pid=4141 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.488000 audit[4141]: AVC avc: denied { perfmon } for pid=4141 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.488000 audit[4141]: AVC avc: denied { perfmon } for pid=4141 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.488000 audit[4141]: AVC avc: denied { perfmon } for pid=4141 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.488000 audit[4141]: AVC avc: denied { perfmon } for pid=4141 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.488000 audit[4141]: AVC avc: denied { bpf } for pid=4141 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.488000 audit[4141]: AVC avc: denied { bpf } for pid=4141 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.488000 audit: BPF prog-id=12 op=LOAD Sep 13 01:35:41.488000 audit[4141]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffef09b788 a2=40 a3=ffffef09b7b8 items=0 ppid=4047 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.488000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 01:35:41.488000 audit: BPF prog-id=12 op=UNLOAD Sep 13 01:35:41.488000 audit[4141]: AVC avc: denied { perfmon } for pid=4141 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.488000 audit[4141]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=3 a0=0 a1=ffffef09b8a0 a2=50 a3=0 items=0 ppid=4047 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.488000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit: BPF prog-id=13 op=LOAD Sep 13 01:35:41.490000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe104a2a8 a2=98 a3=ffffe104a298 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.490000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.490000 audit: BPF prog-id=13 op=UNLOAD Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { perfmon } 
for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.490000 audit: BPF prog-id=14 op=LOAD Sep 13 01:35:41.490000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe1049f38 a2=74 a3=95 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.490000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.491000 audit: BPF prog-id=14 op=UNLOAD Sep 13 01:35:41.491000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.491000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.491000 
audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.491000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.491000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.491000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.491000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.491000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.491000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.491000 audit: BPF prog-id=15 op=LOAD Sep 13 01:35:41.491000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe1049f98 a2=94 a3=2 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.491000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.492000 audit: BPF 
prog-id=15 op=UNLOAD Sep 13 01:35:41.589000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.589000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.589000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.589000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.589000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.589000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.589000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.589000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.589000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.589000 audit: BPF prog-id=16 
op=LOAD Sep 13 01:35:41.589000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe1049f58 a2=40 a3=ffffe1049f88 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.589000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.590000 audit: BPF prog-id=16 op=UNLOAD Sep 13 01:35:41.590000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.590000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffe104a070 a2=50 a3=0 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.590000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.598000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.598000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe1049fc8 a2=28 a3=ffffe104a0f8 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.598000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.598000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 13 01:35:41.598000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe1049ff8 a2=28 a3=ffffe104a128 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.598000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.598000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.598000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe1049ea8 a2=28 a3=ffffe1049fd8 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.598000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.598000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.598000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe104a018 a2=28 a3=ffffe104a148 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.598000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.598000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.598000 
audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe1049ff8 a2=28 a3=ffffe104a128 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.598000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.598000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.598000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe1049fe8 a2=28 a3=ffffe104a118 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.598000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.598000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.598000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe104a018 a2=28 a3=ffffe104a148 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.598000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.598000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.598000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 
success=no exit=-22 a0=12 a1=ffffe1049ff8 a2=28 a3=ffffe104a128 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.598000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.598000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.598000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe104a018 a2=28 a3=ffffe104a148 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.598000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.598000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.598000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffe1049fe8 a2=28 a3=ffffe104a118 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.598000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.598000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.598000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffe104a068 
a2=28 a3=ffffe104a1a8 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.598000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffe1049da0 a2=50 a3=0 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.599000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit: BPF prog-id=17 op=LOAD Sep 13 01:35:41.599000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe1049da8 a2=94 a3=5 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.599000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.599000 audit: BPF prog-id=17 op=UNLOAD Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffe1049eb0 a2=50 a3=0 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 01:35:41.599000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffe1049ff8 a2=4 a3=3 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.599000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { confidentiality } for pid=4142 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 01:35:41.599000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffe1049fd8 a2=94 a3=6 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.599000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { confidentiality } for pid=4142 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 01:35:41.599000 
audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffe10497a8 a2=94 a3=83 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.599000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { bpf } for pid=4142 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: AVC avc: denied { perfmon } for pid=4142 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.599000 audit[4142]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffe10497a8 a2=94 a3=83 items=0 ppid=4047 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.599000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { bpf } for pid=4145 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { bpf } for pid=4145 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { perfmon } for pid=4145 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { perfmon } for pid=4145 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { perfmon } for pid=4145 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { perfmon } for pid=4145 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { perfmon } for pid=4145 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { bpf } for pid=4145 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { bpf } for pid=4145 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit: BPF prog-id=18 op=LOAD Sep 13 01:35:41.608000 audit[4145]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeec57448 a2=98 a3=ffffeec57438 items=0 ppid=4047 pid=4145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.608000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 01:35:41.608000 audit: BPF prog-id=18 op=UNLOAD Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { bpf } for pid=4145 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { bpf } for pid=4145 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { perfmon } for pid=4145 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { perfmon } for pid=4145 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { perfmon } for pid=4145 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { perfmon } for pid=4145 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { perfmon } for pid=4145 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { bpf } for pid=4145 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { bpf } for pid=4145 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit: BPF prog-id=19 op=LOAD Sep 13 01:35:41.608000 audit[4145]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeec572f8 a2=74 a3=95 items=0 ppid=4047 pid=4145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.608000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 01:35:41.608000 audit: BPF prog-id=19 op=UNLOAD Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { bpf } for 
pid=4145 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { bpf } for pid=4145 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { perfmon } for pid=4145 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { perfmon } for pid=4145 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { perfmon } for pid=4145 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { perfmon } for pid=4145 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { perfmon } for pid=4145 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { bpf } for pid=4145 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit[4145]: AVC avc: denied { bpf } for pid=4145 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:41.608000 audit: BPF prog-id=20 op=LOAD Sep 13 01:35:41.608000 audit[4145]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=3 a0=5 a1=ffffeec57328 a2=40 a3=ffffeec57358 items=0 ppid=4047 pid=4145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:41.608000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 01:35:41.608000 audit: BPF prog-id=20 op=UNLOAD Sep 13 01:35:42.024695 systemd-networkd[1771]: cali6bf91d84c19: Gained IPv6LL Sep 13 01:35:42.131076 systemd-networkd[1771]: vxlan.calico: Link UP Sep 13 01:35:42.131082 systemd-networkd[1771]: vxlan.calico: Gained carrier Sep 13 01:35:42.147000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.147000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.147000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.147000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.147000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.147000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.147000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.147000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.147000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.147000 audit: BPF prog-id=21 op=LOAD Sep 13 01:35:42.147000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcd9d6568 a2=98 a3=ffffcd9d6558 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.147000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit: BPF prog-id=21 op=UNLOAD Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit: BPF prog-id=22 op=LOAD Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcd9d6248 a2=74 a3=95 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit: BPF prog-id=22 op=UNLOAD Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit: BPF prog-id=23 op=LOAD Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcd9d62a8 a2=94 a3=2 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit: BPF prog-id=23 op=UNLOAD Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcd9d62d8 a2=28 a3=ffffcd9d6408 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 
success=no exit=-22 a0=12 a1=ffffcd9d6308 a2=28 a3=ffffcd9d6438 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd9d61b8 a2=28 a3=ffffcd9d62e8 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcd9d6328 a2=28 a3=ffffcd9d6458 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcd9d6308 a2=28 a3=ffffcd9d6438 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcd9d62f8 a2=28 a3=ffffcd9d6428 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcd9d6328 a2=28 a3=ffffcd9d6458 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd9d6308 a2=28 a3=ffffcd9d6438 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd9d6328 a2=28 a3=ffffcd9d6458 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd9d62f8 a2=28 a3=ffffcd9d6428 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcd9d6378 a2=28 a3=ffffcd9d64b8 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for 
pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit: BPF prog-id=24 op=LOAD Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=6 a0=5 a1=ffffcd9d6198 a2=40 a3=ffffcd9d61c8 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit: BPF prog-id=24 op=UNLOAD Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffcd9d61c0 a2=50 a3=0 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.148000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.148000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffcd9d61c0 a2=50 a3=0 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.148000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { bpf } for 
pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit: BPF prog-id=25 op=LOAD Sep 13 01:35:42.149000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcd9d5928 a2=94 a3=2 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.149000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.149000 audit: BPF prog-id=25 op=UNLOAD Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { perfmon } for pid=4174 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit[4174]: AVC avc: denied { bpf } for pid=4174 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.149000 audit: BPF prog-id=26 op=LOAD Sep 13 01:35:42.149000 audit[4174]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcd9d5ab8 a2=94 a3=30 items=0 ppid=4047 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.149000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 01:35:42.152000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.152000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.152000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.152000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.152000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.152000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.152000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.152000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.152000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.152000 audit: BPF prog-id=27 op=LOAD Sep 13 01:35:42.152000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffde5bcbf8 a2=98 a3=ffffde5bcbe8 items=0 ppid=4047 pid=4176 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.152000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.153000 audit: BPF prog-id=27 op=UNLOAD Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 
audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit: BPF prog-id=28 op=LOAD Sep 13 01:35:42.153000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffde5bc888 a2=74 a3=95 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.153000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.153000 audit: BPF prog-id=28 op=UNLOAD Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { perfmon } for 
pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.153000 audit: BPF prog-id=29 op=LOAD Sep 13 01:35:42.153000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffde5bc8e8 a2=94 a3=2 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.153000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.153000 audit: BPF prog-id=29 op=UNLOAD Sep 13 01:35:42.263000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.263000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.263000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.263000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.263000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.263000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.263000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.263000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.263000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.263000 audit: BPF prog-id=30 op=LOAD Sep 13 01:35:42.263000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffde5bc8a8 a2=40 a3=ffffde5bc8d8 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 01:35:42.263000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.263000 audit: BPF prog-id=30 op=UNLOAD Sep 13 01:35:42.263000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.263000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffde5bc9c0 a2=50 a3=0 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.263000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.272000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.272000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde5bc918 a2=28 a3=ffffde5bca48 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.272000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.272000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.272000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffde5bc948 a2=28 a3=ffffde5bca78 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.272000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.272000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.272000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffde5bc7f8 a2=28 a3=ffffde5bc928 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.272000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.272000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.272000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde5bc968 a2=28 a3=ffffde5bca98 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 
01:35:42.272000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.272000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.272000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde5bc948 a2=28 a3=ffffde5bca78 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.272000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.272000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.272000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde5bc938 a2=28 a3=ffffde5bca68 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.272000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.272000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Sep 13 01:35:42.272000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde5bc968 a2=28 a3=ffffde5bca98 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.272000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.272000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.272000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffde5bc948 a2=28 a3=ffffde5bca78 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.272000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.272000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.272000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffde5bc968 a2=28 a3=ffffde5bca98 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.272000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.272000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.272000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffde5bc938 a2=28 a3=ffffde5bca68 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.272000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.272000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.272000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde5bc9b8 a2=28 a3=ffffde5bcaf8 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.272000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 
audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffde5bc6f0 a2=50 a3=0 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit: BPF prog-id=31 op=LOAD Sep 13 01:35:42.273000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffde5bc6f8 a2=94 a3=5 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.273000 audit: BPF prog-id=31 op=UNLOAD Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffde5bc800 a2=50 a3=0 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } 
for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffde5bc948 a2=4 a3=3 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 
13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { confidentiality } for pid=4176 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 01:35:42.273000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffde5bc928 a2=94 a3=6 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { confidentiality } for pid=4176 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 01:35:42.273000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffde5bc0f8 a2=94 a3=83 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: 
denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { perfmon } for pid=4176 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { confidentiality } for pid=4176 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 01:35:42.273000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffde5bc0f8 a2=94 a3=83 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.273000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.273000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffde5bdb38 a2=10 a3=ffffde5bdc28 items=0 ppid=4047 
pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.274000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.274000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffde5bd9f8 a2=10 a3=ffffde5bdae8 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.274000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.274000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.274000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffde5bd968 a2=10 a3=ffffde5bdae8 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.274000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 
01:35:42.274000 audit[4176]: AVC avc: denied { bpf } for pid=4176 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 01:35:42.274000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffde5bd968 a2=10 a3=ffffde5bdae8 items=0 ppid=4047 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.274000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 01:35:42.289000 audit: BPF prog-id=26 op=UNLOAD Sep 13 01:35:42.965000 audit[4203]: NETFILTER_CFG table=mangle:108 family=2 entries=16 op=nft_register_chain pid=4203 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:35:42.965000 audit[4203]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffff74aa6a0 a2=0 a3=ffffb29d6fa8 items=0 ppid=4047 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:42.965000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:35:43.001000 audit[4200]: NETFILTER_CFG table=nat:109 family=2 entries=15 op=nft_register_chain pid=4200 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:35:43.001000 audit[4200]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd8f9c5b0 a2=0 a3=ffffa8912fa8 items=0 ppid=4047 pid=4200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:43.001000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:35:43.046000 audit[4202]: NETFILTER_CFG table=raw:110 family=2 entries=21 op=nft_register_chain pid=4202 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:35:43.046000 audit[4202]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffdc62fe30 a2=0 a3=ffff876f6fa8 items=0 ppid=4047 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:43.046000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:35:43.065000 audit[4201]: NETFILTER_CFG table=filter:111 family=2 entries=94 op=nft_register_chain pid=4201 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:35:43.065000 audit[4201]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffe3b09480 a2=0 a3=ffffbd6c4fa8 items=0 ppid=4047 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:43.065000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:35:43.603929 env[1589]: time="2025-09-13T01:35:43.603847413Z" level=info msg="StopPodSandbox for \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\"" Sep 13 01:35:43.604468 env[1589]: 
time="2025-09-13T01:35:43.604439573Z" level=info msg="StopPodSandbox for \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\"" Sep 13 01:35:43.604698 env[1589]: time="2025-09-13T01:35:43.604670532Z" level=info msg="StopPodSandbox for \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\"" Sep 13 01:35:43.771086 env[1589]: 2025-09-13 01:35:43.686 [INFO][4241] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Sep 13 01:35:43.771086 env[1589]: 2025-09-13 01:35:43.686 [INFO][4241] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" iface="eth0" netns="/var/run/netns/cni-a7445bc6-10af-d60c-4f43-8e5f14392c5f" Sep 13 01:35:43.771086 env[1589]: 2025-09-13 01:35:43.686 [INFO][4241] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" iface="eth0" netns="/var/run/netns/cni-a7445bc6-10af-d60c-4f43-8e5f14392c5f" Sep 13 01:35:43.771086 env[1589]: 2025-09-13 01:35:43.686 [INFO][4241] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" iface="eth0" netns="/var/run/netns/cni-a7445bc6-10af-d60c-4f43-8e5f14392c5f" Sep 13 01:35:43.771086 env[1589]: 2025-09-13 01:35:43.687 [INFO][4241] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Sep 13 01:35:43.771086 env[1589]: 2025-09-13 01:35:43.687 [INFO][4241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Sep 13 01:35:43.771086 env[1589]: 2025-09-13 01:35:43.754 [INFO][4264] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" HandleID="k8s-pod-network.344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:35:43.771086 env[1589]: 2025-09-13 01:35:43.755 [INFO][4264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:43.771086 env[1589]: 2025-09-13 01:35:43.755 [INFO][4264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:43.771086 env[1589]: 2025-09-13 01:35:43.767 [WARNING][4264] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" HandleID="k8s-pod-network.344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:35:43.771086 env[1589]: 2025-09-13 01:35:43.767 [INFO][4264] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" HandleID="k8s-pod-network.344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:35:43.771086 env[1589]: 2025-09-13 01:35:43.768 [INFO][4264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:43.771086 env[1589]: 2025-09-13 01:35:43.769 [INFO][4241] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Sep 13 01:35:43.773667 systemd[1]: run-netns-cni\x2da7445bc6\x2d10af\x2dd60c\x2d4f43\x2d8e5f14392c5f.mount: Deactivated successfully. 
Sep 13 01:35:43.775618 env[1589]: time="2025-09-13T01:35:43.775577624Z" level=info msg="TearDown network for sandbox \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\" successfully" Sep 13 01:35:43.775699 env[1589]: time="2025-09-13T01:35:43.775684144Z" level=info msg="StopPodSandbox for \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\" returns successfully" Sep 13 01:35:43.776385 env[1589]: time="2025-09-13T01:35:43.776357784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7rjjf,Uid:18937242-9ba8-46ed-bd00-32bfe4ab9056,Namespace:kube-system,Attempt:1,}" Sep 13 01:35:43.819783 env[1589]: 2025-09-13 01:35:43.706 [INFO][4243] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Sep 13 01:35:43.819783 env[1589]: 2025-09-13 01:35:43.706 [INFO][4243] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" iface="eth0" netns="/var/run/netns/cni-202838d4-fd49-b6ec-85bb-69f5c94d6487" Sep 13 01:35:43.819783 env[1589]: 2025-09-13 01:35:43.706 [INFO][4243] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" iface="eth0" netns="/var/run/netns/cni-202838d4-fd49-b6ec-85bb-69f5c94d6487" Sep 13 01:35:43.819783 env[1589]: 2025-09-13 01:35:43.706 [INFO][4243] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" iface="eth0" netns="/var/run/netns/cni-202838d4-fd49-b6ec-85bb-69f5c94d6487" Sep 13 01:35:43.819783 env[1589]: 2025-09-13 01:35:43.706 [INFO][4243] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Sep 13 01:35:43.819783 env[1589]: 2025-09-13 01:35:43.706 [INFO][4243] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Sep 13 01:35:43.819783 env[1589]: 2025-09-13 01:35:43.787 [INFO][4272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" HandleID="k8s-pod-network.b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:43.819783 env[1589]: 2025-09-13 01:35:43.788 [INFO][4272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:43.819783 env[1589]: 2025-09-13 01:35:43.788 [INFO][4272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:43.819783 env[1589]: 2025-09-13 01:35:43.798 [WARNING][4272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" HandleID="k8s-pod-network.b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:43.819783 env[1589]: 2025-09-13 01:35:43.798 [INFO][4272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" HandleID="k8s-pod-network.b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:43.819783 env[1589]: 2025-09-13 01:35:43.799 [INFO][4272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:43.819783 env[1589]: 2025-09-13 01:35:43.808 [INFO][4243] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Sep 13 01:35:43.822991 env[1589]: time="2025-09-13T01:35:43.821356945Z" level=info msg="TearDown network for sandbox \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\" successfully" Sep 13 01:35:43.822991 env[1589]: time="2025-09-13T01:35:43.821396865Z" level=info msg="StopPodSandbox for \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\" returns successfully" Sep 13 01:35:43.822008 systemd[1]: run-netns-cni\x2d202838d4\x2dfd49\x2db6ec\x2d85bb\x2d69f5c94d6487.mount: Deactivated successfully. 
Sep 13 01:35:43.823138 env[1589]: time="2025-09-13T01:35:43.823003983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84d8c5c799-tjhw6,Uid:ccc7d775-178b-498c-91bf-f43ba47754ca,Namespace:calico-system,Attempt:1,}" Sep 13 01:35:43.831226 env[1589]: 2025-09-13 01:35:43.713 [INFO][4252] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Sep 13 01:35:43.831226 env[1589]: 2025-09-13 01:35:43.713 [INFO][4252] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" iface="eth0" netns="/var/run/netns/cni-22466d5d-2af0-4182-d01a-1261368a88b5" Sep 13 01:35:43.831226 env[1589]: 2025-09-13 01:35:43.713 [INFO][4252] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" iface="eth0" netns="/var/run/netns/cni-22466d5d-2af0-4182-d01a-1261368a88b5" Sep 13 01:35:43.831226 env[1589]: 2025-09-13 01:35:43.713 [INFO][4252] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" iface="eth0" netns="/var/run/netns/cni-22466d5d-2af0-4182-d01a-1261368a88b5" Sep 13 01:35:43.831226 env[1589]: 2025-09-13 01:35:43.713 [INFO][4252] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Sep 13 01:35:43.831226 env[1589]: 2025-09-13 01:35:43.713 [INFO][4252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Sep 13 01:35:43.831226 env[1589]: 2025-09-13 01:35:43.789 [INFO][4273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" HandleID="k8s-pod-network.f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:35:43.831226 env[1589]: 2025-09-13 01:35:43.790 [INFO][4273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:43.831226 env[1589]: 2025-09-13 01:35:43.799 [INFO][4273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:43.831226 env[1589]: 2025-09-13 01:35:43.824 [WARNING][4273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" HandleID="k8s-pod-network.f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:35:43.831226 env[1589]: 2025-09-13 01:35:43.824 [INFO][4273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" HandleID="k8s-pod-network.f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:35:43.831226 env[1589]: 2025-09-13 01:35:43.827 [INFO][4273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:43.831226 env[1589]: 2025-09-13 01:35:43.829 [INFO][4252] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Sep 13 01:35:43.836169 env[1589]: time="2025-09-13T01:35:43.834596613Z" level=info msg="TearDown network for sandbox \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\" successfully" Sep 13 01:35:43.836169 env[1589]: time="2025-09-13T01:35:43.834635053Z" level=info msg="StopPodSandbox for \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\" returns successfully" Sep 13 01:35:43.833756 systemd[1]: run-netns-cni\x2d22466d5d\x2d2af0\x2d4182\x2dd01a\x2d1261368a88b5.mount: Deactivated successfully. 
Sep 13 01:35:43.836688 env[1589]: time="2025-09-13T01:35:43.836648771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948d44db6-dx4zf,Uid:42ab5820-ae33-4608-be70-9aaefef7b587,Namespace:calico-apiserver,Attempt:1,}" Sep 13 01:35:43.942387 systemd-networkd[1771]: vxlan.calico: Gained IPv6LL Sep 13 01:35:43.960457 env[1589]: time="2025-09-13T01:35:43.960415344Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:43.966484 env[1589]: time="2025-09-13T01:35:43.966432259Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:43.972194 env[1589]: time="2025-09-13T01:35:43.972140534Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:43.982392 env[1589]: time="2025-09-13T01:35:43.982341165Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:43.982848 env[1589]: time="2025-09-13T01:35:43.982804285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 13 01:35:43.990157 env[1589]: time="2025-09-13T01:35:43.989304119Z" level=info msg="CreateContainer within sandbox \"36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 01:35:44.003346 systemd-networkd[1771]: calia8b434c0622: Link UP Sep 13 01:35:44.010275 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 01:35:44.022277 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia8b434c0622: link becomes ready Sep 13 01:35:44.022036 systemd-networkd[1771]: calia8b434c0622: Gained carrier Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.885 [INFO][4286] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0 coredns-7c65d6cfc9- kube-system 18937242-9ba8-46ed-bd00-32bfe4ab9056 921 0 2025-09-13 01:35:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-9d226ffbbf coredns-7c65d6cfc9-7rjjf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia8b434c0622 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7rjjf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-" Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.885 [INFO][4286] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7rjjf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.926 [INFO][4299] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" HandleID="k8s-pod-network.c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.926 [INFO][4299] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" HandleID="k8s-pod-network.c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-9d226ffbbf", "pod":"coredns-7c65d6cfc9-7rjjf", "timestamp":"2025-09-13 01:35:43.926317774 +0000 UTC"}, Hostname:"ci-3510.3.8-n-9d226ffbbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.926 [INFO][4299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.926 [INFO][4299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.926 [INFO][4299] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-9d226ffbbf' Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.936 [INFO][4299] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.948 [INFO][4299] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.952 [INFO][4299] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.954 [INFO][4299] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.961 [INFO][4299] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.961 [INFO][4299] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.968 [INFO][4299] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535 Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.974 [INFO][4299] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.989 [INFO][4299] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.44.66/26] block=192.168.44.64/26 
handle="k8s-pod-network.c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.990 [INFO][4299] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.66/26] handle="k8s-pod-network.c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.990 [INFO][4299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:44.065045 env[1589]: 2025-09-13 01:35:43.990 [INFO][4299] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.66/26] IPv6=[] ContainerID="c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" HandleID="k8s-pod-network.c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:35:44.065640 env[1589]: 2025-09-13 01:35:43.992 [INFO][4286] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7rjjf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18937242-9ba8-46ed-bd00-32bfe4ab9056", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"", Pod:"coredns-7c65d6cfc9-7rjjf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia8b434c0622", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:44.065640 env[1589]: 2025-09-13 01:35:43.992 [INFO][4286] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.66/32] ContainerID="c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7rjjf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:35:44.065640 env[1589]: 2025-09-13 01:35:43.992 [INFO][4286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8b434c0622 ContainerID="c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7rjjf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:35:44.065640 env[1589]: 2025-09-13 01:35:44.026 [INFO][4286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7rjjf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:35:44.065640 env[1589]: 2025-09-13 01:35:44.026 [INFO][4286] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7rjjf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18937242-9ba8-46ed-bd00-32bfe4ab9056", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535", Pod:"coredns-7c65d6cfc9-7rjjf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia8b434c0622", MAC:"f2:0c:cf:09:13:f1", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:44.065640 env[1589]: 2025-09-13 01:35:44.054 [INFO][4286] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7rjjf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:35:44.087050 env[1589]: time="2025-09-13T01:35:44.087002035Z" level=info msg="CreateContainer within sandbox \"36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5281e114be5cdf5c7cf2ddffd6b2b6eabc597f2c705e982d8ce1a24f2a8e1aac\"" Sep 13 01:35:44.088064 env[1589]: time="2025-09-13T01:35:44.088036795Z" level=info msg="StartContainer for \"5281e114be5cdf5c7cf2ddffd6b2b6eabc597f2c705e982d8ce1a24f2a8e1aac\"" Sep 13 01:35:44.102110 env[1589]: time="2025-09-13T01:35:44.102037783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:44.103288 env[1589]: time="2025-09-13T01:35:44.103253782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:44.103445 env[1589]: time="2025-09-13T01:35:44.103422821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:44.104387 env[1589]: time="2025-09-13T01:35:44.104148981Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535 pid=4363 runtime=io.containerd.runc.v2 Sep 13 01:35:44.106000 audit[4349]: NETFILTER_CFG table=filter:112 family=2 entries=42 op=nft_register_chain pid=4349 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:35:44.106000 audit[4349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffdc48f780 a2=0 a3=ffffadf4bfa8 items=0 ppid=4047 pid=4349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:44.106000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:35:44.129076 systemd-networkd[1771]: cali51f5d94bb4b: Link UP Sep 13 01:35:44.170317 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali51f5d94bb4b: link becomes ready Sep 13 01:35:44.169834 systemd-networkd[1771]: cali51f5d94bb4b: Gained carrier Sep 13 01:35:44.201505 env[1589]: time="2025-09-13T01:35:44.193839664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7rjjf,Uid:18937242-9ba8-46ed-bd00-32bfe4ab9056,Namespace:kube-system,Attempt:1,} returns sandbox id \"c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535\"" Sep 13 01:35:44.202931 env[1589]: time="2025-09-13T01:35:44.202898177Z" level=info msg="CreateContainer within sandbox \"c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 01:35:44.207000 audit[4424]: NETFILTER_CFG table=filter:113 family=2 entries=40 
op=nft_register_chain pid=4424 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:43.974 [INFO][4303] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0 calico-kube-controllers-84d8c5c799- calico-system ccc7d775-178b-498c-91bf-f43ba47754ca 922 0 2025-09-13 01:35:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84d8c5c799 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.8-n-9d226ffbbf calico-kube-controllers-84d8c5c799-tjhw6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali51f5d94bb4b [] [] }} ContainerID="4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" Namespace="calico-system" Pod="calico-kube-controllers-84d8c5c799-tjhw6" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-" Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:43.974 [INFO][4303] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" Namespace="calico-system" Pod="calico-kube-controllers-84d8c5c799-tjhw6" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.033 [INFO][4333] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" HandleID="k8s-pod-network.4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:44.208088 env[1589]: 
2025-09-13 01:35:44.033 [INFO][4333] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" HandleID="k8s-pod-network.4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-9d226ffbbf", "pod":"calico-kube-controllers-84d8c5c799-tjhw6", "timestamp":"2025-09-13 01:35:44.033032762 +0000 UTC"}, Hostname:"ci-3510.3.8-n-9d226ffbbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.033 [INFO][4333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.033 [INFO][4333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.033 [INFO][4333] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-9d226ffbbf' Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.048 [INFO][4333] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.056 [INFO][4333] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.081 [INFO][4333] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.083 [INFO][4333] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.089 [INFO][4333] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.089 [INFO][4333] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.097 [INFO][4333] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.102 [INFO][4333] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.113 [INFO][4333] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.44.67/26] block=192.168.44.64/26 
handle="k8s-pod-network.4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.113 [INFO][4333] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.67/26] handle="k8s-pod-network.4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.113 [INFO][4333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:44.208088 env[1589]: 2025-09-13 01:35:44.113 [INFO][4333] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.67/26] IPv6=[] ContainerID="4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" HandleID="k8s-pod-network.4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:44.208632 env[1589]: 2025-09-13 01:35:44.117 [INFO][4303] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" Namespace="calico-system" Pod="calico-kube-controllers-84d8c5c799-tjhw6" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0", GenerateName:"calico-kube-controllers-84d8c5c799-", Namespace:"calico-system", SelfLink:"", UID:"ccc7d775-178b-498c-91bf-f43ba47754ca", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84d8c5c799", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"", Pod:"calico-kube-controllers-84d8c5c799-tjhw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali51f5d94bb4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:44.208632 env[1589]: 2025-09-13 01:35:44.118 [INFO][4303] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.67/32] ContainerID="4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" Namespace="calico-system" Pod="calico-kube-controllers-84d8c5c799-tjhw6" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:44.208632 env[1589]: 2025-09-13 01:35:44.118 [INFO][4303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51f5d94bb4b ContainerID="4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" Namespace="calico-system" Pod="calico-kube-controllers-84d8c5c799-tjhw6" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:44.208632 env[1589]: 2025-09-13 01:35:44.171 [INFO][4303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" Namespace="calico-system" Pod="calico-kube-controllers-84d8c5c799-tjhw6" 
WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:44.208632 env[1589]: 2025-09-13 01:35:44.177 [INFO][4303] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" Namespace="calico-system" Pod="calico-kube-controllers-84d8c5c799-tjhw6" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0", GenerateName:"calico-kube-controllers-84d8c5c799-", Namespace:"calico-system", SelfLink:"", UID:"ccc7d775-178b-498c-91bf-f43ba47754ca", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84d8c5c799", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f", Pod:"calico-kube-controllers-84d8c5c799-tjhw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"cali51f5d94bb4b", MAC:"ba:9a:a3:79:ee:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:44.208632 env[1589]: 2025-09-13 01:35:44.202 [INFO][4303] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f" Namespace="calico-system" Pod="calico-kube-controllers-84d8c5c799-tjhw6" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:44.207000 audit[4424]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=ffffd033a260 a2=0 a3=ffff96474fa8 items=0 ppid=4047 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:44.207000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:35:44.253342 env[1589]: time="2025-09-13T01:35:44.253298654Z" level=info msg="StartContainer for \"5281e114be5cdf5c7cf2ddffd6b2b6eabc597f2c705e982d8ce1a24f2a8e1aac\" returns successfully" Sep 13 01:35:44.256376 env[1589]: time="2025-09-13T01:35:44.256336051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 01:35:44.265956 env[1589]: time="2025-09-13T01:35:44.265863523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:44.266105 env[1589]: time="2025-09-13T01:35:44.265929683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:44.266350 env[1589]: time="2025-09-13T01:35:44.266169643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:44.267240 env[1589]: time="2025-09-13T01:35:44.267134522Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f pid=4448 runtime=io.containerd.runc.v2 Sep 13 01:35:44.274748 env[1589]: time="2025-09-13T01:35:44.274697675Z" level=info msg="CreateContainer within sandbox \"c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bee31ca5dad73c4da946e842d8048a7f5d30fb5af47efbe8d776244c2d024224\"" Sep 13 01:35:44.277042 env[1589]: time="2025-09-13T01:35:44.277002233Z" level=info msg="StartContainer for \"bee31ca5dad73c4da946e842d8048a7f5d30fb5af47efbe8d776244c2d024224\"" Sep 13 01:35:44.288638 systemd-networkd[1771]: cali6fc3f020bef: Link UP Sep 13 01:35:44.300301 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6fc3f020bef: link becomes ready Sep 13 01:35:44.301669 systemd-networkd[1771]: cali6fc3f020bef: Gained carrier Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.083 [INFO][4318] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0 calico-apiserver-948d44db6- calico-apiserver 42ab5820-ae33-4608-be70-9aaefef7b587 923 0 2025-09-13 01:35:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:948d44db6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-9d226ffbbf calico-apiserver-948d44db6-dx4zf eth0 
calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6fc3f020bef [] [] }} ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dx4zf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-" Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.085 [INFO][4318] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dx4zf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.213 [INFO][4371] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" HandleID="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.213 [INFO][4371] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" HandleID="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003382a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-9d226ffbbf", "pod":"calico-apiserver-948d44db6-dx4zf", "timestamp":"2025-09-13 01:35:44.213662647 +0000 UTC"}, Hostname:"ci-3510.3.8-n-9d226ffbbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 
01:35:44.320603 env[1589]: 2025-09-13 01:35:44.213 [INFO][4371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.213 [INFO][4371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.214 [INFO][4371] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-9d226ffbbf' Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.232 [INFO][4371] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.238 [INFO][4371] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.246 [INFO][4371] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.250 [INFO][4371] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.254 [INFO][4371] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.254 [INFO][4371] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.261 [INFO][4371] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864 Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.269 [INFO][4371] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.44.64/26 
handle="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.281 [INFO][4371] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.44.68/26] block=192.168.44.64/26 handle="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.281 [INFO][4371] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.68/26] handle="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.281 [INFO][4371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:44.320603 env[1589]: 2025-09-13 01:35:44.281 [INFO][4371] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.68/26] IPv6=[] ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" HandleID="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:35:44.321164 env[1589]: 2025-09-13 01:35:44.283 [INFO][4318] cni-plugin/k8s.go 418: Populated endpoint ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dx4zf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0", GenerateName:"calico-apiserver-948d44db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"42ab5820-ae33-4608-be70-9aaefef7b587", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 
1, 35, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948d44db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"", Pod:"calico-apiserver-948d44db6-dx4zf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6fc3f020bef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:44.321164 env[1589]: 2025-09-13 01:35:44.283 [INFO][4318] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.68/32] ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dx4zf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:35:44.321164 env[1589]: 2025-09-13 01:35:44.283 [INFO][4318] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fc3f020bef ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dx4zf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:35:44.321164 env[1589]: 2025-09-13 01:35:44.302 [INFO][4318] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dx4zf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:35:44.321164 env[1589]: 2025-09-13 01:35:44.303 [INFO][4318] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dx4zf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0", GenerateName:"calico-apiserver-948d44db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"42ab5820-ae33-4608-be70-9aaefef7b587", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948d44db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864", Pod:"calico-apiserver-948d44db6-dx4zf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6fc3f020bef", MAC:"5a:06:8f:5b:c0:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:44.321164 env[1589]: 2025-09-13 01:35:44.318 [INFO][4318] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dx4zf" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:35:44.354000 audit[4514]: NETFILTER_CFG table=filter:114 family=2 entries=64 op=nft_register_chain pid=4514 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:35:44.354000 audit[4514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=33436 a0=3 a1=fffffdcf9ea0 a2=0 a3=ffff9aa9cfa8 items=0 ppid=4047 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:44.354000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:35:44.366266 env[1589]: time="2025-09-13T01:35:44.366175277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84d8c5c799-tjhw6,Uid:ccc7d775-178b-498c-91bf-f43ba47754ca,Namespace:calico-system,Attempt:1,} returns sandbox id \"4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f\"" Sep 13 01:35:44.377414 env[1589]: time="2025-09-13T01:35:44.377332548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:44.377622 env[1589]: time="2025-09-13T01:35:44.377599268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:44.377743 env[1589]: time="2025-09-13T01:35:44.377722107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:44.377982 env[1589]: time="2025-09-13T01:35:44.377947307Z" level=info msg="StartContainer for \"bee31ca5dad73c4da946e842d8048a7f5d30fb5af47efbe8d776244c2d024224\" returns successfully" Sep 13 01:35:44.382281 env[1589]: time="2025-09-13T01:35:44.380774145Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864 pid=4528 runtime=io.containerd.runc.v2 Sep 13 01:35:44.437690 env[1589]: time="2025-09-13T01:35:44.437639576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948d44db6-dx4zf,Uid:42ab5820-ae33-4608-be70-9aaefef7b587,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864\"" Sep 13 01:35:44.610866 env[1589]: time="2025-09-13T01:35:44.605927473Z" level=info msg="StopPodSandbox for \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\"" Sep 13 01:35:44.610866 env[1589]: time="2025-09-13T01:35:44.607772271Z" level=info msg="StopPodSandbox for \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\"" Sep 13 01:35:44.714821 env[1589]: 2025-09-13 01:35:44.673 [INFO][4591] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Sep 13 01:35:44.714821 env[1589]: 2025-09-13 01:35:44.673 [INFO][4591] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" iface="eth0" netns="/var/run/netns/cni-9fcf2891-d83c-5cdd-56cf-e40707f1dd0f" Sep 13 01:35:44.714821 env[1589]: 2025-09-13 01:35:44.673 [INFO][4591] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" iface="eth0" netns="/var/run/netns/cni-9fcf2891-d83c-5cdd-56cf-e40707f1dd0f" Sep 13 01:35:44.714821 env[1589]: 2025-09-13 01:35:44.673 [INFO][4591] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" iface="eth0" netns="/var/run/netns/cni-9fcf2891-d83c-5cdd-56cf-e40707f1dd0f" Sep 13 01:35:44.714821 env[1589]: 2025-09-13 01:35:44.673 [INFO][4591] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Sep 13 01:35:44.714821 env[1589]: 2025-09-13 01:35:44.673 [INFO][4591] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Sep 13 01:35:44.714821 env[1589]: 2025-09-13 01:35:44.700 [INFO][4604] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" HandleID="k8s-pod-network.e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:35:44.714821 env[1589]: 2025-09-13 01:35:44.700 [INFO][4604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:44.714821 env[1589]: 2025-09-13 01:35:44.700 [INFO][4604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:44.714821 env[1589]: 2025-09-13 01:35:44.709 [WARNING][4604] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" HandleID="k8s-pod-network.e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:35:44.714821 env[1589]: 2025-09-13 01:35:44.709 [INFO][4604] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" HandleID="k8s-pod-network.e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:35:44.714821 env[1589]: 2025-09-13 01:35:44.710 [INFO][4604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:44.714821 env[1589]: 2025-09-13 01:35:44.712 [INFO][4591] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Sep 13 01:35:44.715337 env[1589]: time="2025-09-13T01:35:44.714947340Z" level=info msg="TearDown network for sandbox \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\" successfully" Sep 13 01:35:44.715337 env[1589]: time="2025-09-13T01:35:44.714979020Z" level=info msg="StopPodSandbox for \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\" returns successfully" Sep 13 01:35:44.715974 env[1589]: time="2025-09-13T01:35:44.715946459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-2qlzv,Uid:e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03,Namespace:calico-system,Attempt:1,}" Sep 13 01:35:44.748539 env[1589]: 2025-09-13 01:35:44.694 [INFO][4592] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Sep 13 01:35:44.748539 env[1589]: 2025-09-13 01:35:44.694 [INFO][4592] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" iface="eth0" netns="/var/run/netns/cni-7077bc90-a2df-6798-9966-120dea1000e7" Sep 13 01:35:44.748539 env[1589]: 2025-09-13 01:35:44.694 [INFO][4592] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" iface="eth0" netns="/var/run/netns/cni-7077bc90-a2df-6798-9966-120dea1000e7" Sep 13 01:35:44.748539 env[1589]: 2025-09-13 01:35:44.694 [INFO][4592] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" iface="eth0" netns="/var/run/netns/cni-7077bc90-a2df-6798-9966-120dea1000e7" Sep 13 01:35:44.748539 env[1589]: 2025-09-13 01:35:44.694 [INFO][4592] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Sep 13 01:35:44.748539 env[1589]: 2025-09-13 01:35:44.694 [INFO][4592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Sep 13 01:35:44.748539 env[1589]: 2025-09-13 01:35:44.730 [INFO][4611] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" HandleID="k8s-pod-network.3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:35:44.748539 env[1589]: 2025-09-13 01:35:44.730 [INFO][4611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:44.748539 env[1589]: 2025-09-13 01:35:44.730 [INFO][4611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:44.748539 env[1589]: 2025-09-13 01:35:44.741 [WARNING][4611] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" HandleID="k8s-pod-network.3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:35:44.748539 env[1589]: 2025-09-13 01:35:44.741 [INFO][4611] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" HandleID="k8s-pod-network.3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:35:44.748539 env[1589]: 2025-09-13 01:35:44.743 [INFO][4611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:44.748539 env[1589]: 2025-09-13 01:35:44.746 [INFO][4592] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Sep 13 01:35:44.749113 env[1589]: time="2025-09-13T01:35:44.749083271Z" level=info msg="TearDown network for sandbox \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\" successfully" Sep 13 01:35:44.749209 env[1589]: time="2025-09-13T01:35:44.749168511Z" level=info msg="StopPodSandbox for \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\" returns successfully" Sep 13 01:35:44.750000 env[1589]: time="2025-09-13T01:35:44.749963910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vpflq,Uid:77870db6-b52e-4395-a518-9c1b7d66eb0e,Namespace:calico-system,Attempt:1,}" Sep 13 01:35:44.779590 systemd[1]: run-netns-cni\x2d7077bc90\x2da2df\x2d6798\x2d9966\x2d120dea1000e7.mount: Deactivated successfully. Sep 13 01:35:44.780034 systemd[1]: run-netns-cni\x2d9fcf2891\x2dd83c\x2d5cdd\x2d56cf\x2de40707f1dd0f.mount: Deactivated successfully. 
Sep 13 01:35:44.909322 systemd-networkd[1771]: calib51311b43a6: Link UP Sep 13 01:35:44.919772 systemd-networkd[1771]: calib51311b43a6: Gained carrier Sep 13 01:35:44.920243 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib51311b43a6: link becomes ready Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.819 [INFO][4618] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0 goldmane-7988f88666- calico-system e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03 946 0 2025-09-13 01:35:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.8-n-9d226ffbbf goldmane-7988f88666-2qlzv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib51311b43a6 [] [] }} ContainerID="0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" Namespace="calico-system" Pod="goldmane-7988f88666-2qlzv" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-" Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.819 [INFO][4618] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" Namespace="calico-system" Pod="goldmane-7988f88666-2qlzv" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.854 [INFO][4630] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" HandleID="k8s-pod-network.0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.854 [INFO][4630] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" HandleID="k8s-pod-network.0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3690), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-9d226ffbbf", "pod":"goldmane-7988f88666-2qlzv", "timestamp":"2025-09-13 01:35:44.854083621 +0000 UTC"}, Hostname:"ci-3510.3.8-n-9d226ffbbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.854 [INFO][4630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.854 [INFO][4630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.854 [INFO][4630] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-9d226ffbbf' Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.864 [INFO][4630] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.870 [INFO][4630] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.876 [INFO][4630] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.878 [INFO][4630] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.880 [INFO][4630] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.880 [INFO][4630] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.881 [INFO][4630] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.889 [INFO][4630] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.897 [INFO][4630] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.44.69/26] block=192.168.44.64/26 
handle="k8s-pod-network.0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.897 [INFO][4630] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.69/26] handle="k8s-pod-network.0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.897 [INFO][4630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:44.943821 env[1589]: 2025-09-13 01:35:44.897 [INFO][4630] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.69/26] IPv6=[] ContainerID="0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" HandleID="k8s-pod-network.0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:35:44.944468 env[1589]: 2025-09-13 01:35:44.902 [INFO][4618] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" Namespace="calico-system" Pod="goldmane-7988f88666-2qlzv" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"", Pod:"goldmane-7988f88666-2qlzv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib51311b43a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:44.944468 env[1589]: 2025-09-13 01:35:44.902 [INFO][4618] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.69/32] ContainerID="0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" Namespace="calico-system" Pod="goldmane-7988f88666-2qlzv" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:35:44.944468 env[1589]: 2025-09-13 01:35:44.903 [INFO][4618] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib51311b43a6 ContainerID="0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" Namespace="calico-system" Pod="goldmane-7988f88666-2qlzv" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:35:44.944468 env[1589]: 2025-09-13 01:35:44.920 [INFO][4618] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" Namespace="calico-system" Pod="goldmane-7988f88666-2qlzv" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:35:44.944468 env[1589]: 2025-09-13 01:35:44.922 [INFO][4618] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" Namespace="calico-system" Pod="goldmane-7988f88666-2qlzv" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf", Pod:"goldmane-7988f88666-2qlzv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib51311b43a6", MAC:"76:c6:0b:91:d2:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:44.944468 env[1589]: 2025-09-13 01:35:44.941 [INFO][4618] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf" Namespace="calico-system" Pod="goldmane-7988f88666-2qlzv" 
WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:35:44.967173 env[1589]: time="2025-09-13T01:35:44.966553205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:44.967173 env[1589]: time="2025-09-13T01:35:44.966610205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:44.967173 env[1589]: time="2025-09-13T01:35:44.966620685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:44.967173 env[1589]: time="2025-09-13T01:35:44.966754885Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf pid=4670 runtime=io.containerd.runc.v2 Sep 13 01:35:45.001000 audit[4691]: NETFILTER_CFG table=filter:115 family=2 entries=52 op=nft_register_chain pid=4691 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:35:45.001000 audit[4691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27540 a0=3 a1=ffffde45b3c0 a2=0 a3=ffffb8708fa8 items=0 ppid=4047 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:45.001000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:35:45.031097 kubelet[2688]: I0913 01:35:45.030491 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7rjjf" podStartSLOduration=41.030472831 podStartE2EDuration="41.030472831s" podCreationTimestamp="2025-09-13 01:35:04 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:35:45.029285992 +0000 UTC m=+46.530124224" watchObservedRunningTime="2025-09-13 01:35:45.030472831 +0000 UTC m=+46.531311063" Sep 13 01:35:45.056285 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 01:35:45.056430 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid79375b0230: link becomes ready Sep 13 01:35:45.057899 systemd-networkd[1771]: calid79375b0230: Link UP Sep 13 01:35:45.058060 systemd-networkd[1771]: calid79375b0230: Gained carrier Sep 13 01:35:45.068000 audit[4696]: NETFILTER_CFG table=filter:116 family=2 entries=20 op=nft_register_rule pid=4696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:45.068000 audit[4696]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffeb2d1dd0 a2=0 a3=1 items=0 ppid=2833 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:45.068000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:45.075000 audit[4696]: NETFILTER_CFG table=nat:117 family=2 entries=14 op=nft_register_rule pid=4696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:45.075000 audit[4696]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffeb2d1dd0 a2=0 a3=1 items=0 ppid=2833 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:45.075000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:45.083715 
env[1589]: 2025-09-13 01:35:44.919 [INFO][4635] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0 csi-node-driver- calico-system 77870db6-b52e-4395-a518-9c1b7d66eb0e 947 0 2025-09-13 01:35:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.8-n-9d226ffbbf csi-node-driver-vpflq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid79375b0230 [] [] }} ContainerID="f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" Namespace="calico-system" Pod="csi-node-driver-vpflq" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-" Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:44.919 [INFO][4635] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" Namespace="calico-system" Pod="csi-node-driver-vpflq" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:44.954 [INFO][4650] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" HandleID="k8s-pod-network.f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:44.954 [INFO][4650] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" HandleID="k8s-pod-network.f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" 
Workload="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab490), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-9d226ffbbf", "pod":"csi-node-driver-vpflq", "timestamp":"2025-09-13 01:35:44.953965016 +0000 UTC"}, Hostname:"ci-3510.3.8-n-9d226ffbbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:44.955 [INFO][4650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:44.955 [INFO][4650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:44.955 [INFO][4650] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-9d226ffbbf' Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:44.965 [INFO][4650] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:44.972 [INFO][4650] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:44.978 [INFO][4650] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:44.982 [INFO][4650] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:44.986 [INFO][4650] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:44.986 [INFO][4650] ipam/ipam.go 
1220: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:44.988 [INFO][4650] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:44.997 [INFO][4650] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:45.008 [INFO][4650] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.44.70/26] block=192.168.44.64/26 handle="k8s-pod-network.f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:45.008 [INFO][4650] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.70/26] handle="k8s-pod-network.f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:45.008 [INFO][4650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 01:35:45.083715 env[1589]: 2025-09-13 01:35:45.008 [INFO][4650] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.70/26] IPv6=[] ContainerID="f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" HandleID="k8s-pod-network.f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:35:45.084279 env[1589]: 2025-09-13 01:35:45.013 [INFO][4635] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" Namespace="calico-system" Pod="csi-node-driver-vpflq" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77870db6-b52e-4395-a518-9c1b7d66eb0e", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"", Pod:"csi-node-driver-vpflq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.70/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid79375b0230", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:45.084279 env[1589]: 2025-09-13 01:35:45.013 [INFO][4635] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.70/32] ContainerID="f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" Namespace="calico-system" Pod="csi-node-driver-vpflq" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:35:45.084279 env[1589]: 2025-09-13 01:35:45.013 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid79375b0230 ContainerID="f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" Namespace="calico-system" Pod="csi-node-driver-vpflq" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:35:45.084279 env[1589]: 2025-09-13 01:35:45.060 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" Namespace="calico-system" Pod="csi-node-driver-vpflq" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:35:45.084279 env[1589]: 2025-09-13 01:35:45.060 [INFO][4635] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" Namespace="calico-system" Pod="csi-node-driver-vpflq" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"77870db6-b52e-4395-a518-9c1b7d66eb0e", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce", Pod:"csi-node-driver-vpflq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid79375b0230", MAC:"3a:cd:06:10:9b:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:45.084279 env[1589]: 2025-09-13 01:35:45.082 [INFO][4635] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce" Namespace="calico-system" Pod="csi-node-driver-vpflq" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:35:45.100618 env[1589]: time="2025-09-13T01:35:45.100550492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:45.100817 env[1589]: time="2025-09-13T01:35:45.100793572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:45.100910 env[1589]: time="2025-09-13T01:35:45.100891052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:45.101125 env[1589]: time="2025-09-13T01:35:45.101097892Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce pid=4714 runtime=io.containerd.runc.v2 Sep 13 01:35:45.115000 audit[4728]: NETFILTER_CFG table=filter:118 family=2 entries=17 op=nft_register_rule pid=4728 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:45.115000 audit[4728]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd1292440 a2=0 a3=1 items=0 ppid=2833 pid=4728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:45.115000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:45.128000 audit[4728]: NETFILTER_CFG table=nat:119 family=2 entries=35 op=nft_register_chain pid=4728 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:45.128000 audit[4728]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd1292440 a2=0 a3=1 items=0 ppid=2833 pid=4728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:45.128000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:45.143489 env[1589]: time="2025-09-13T01:35:45.143439016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-2qlzv,Uid:e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03,Namespace:calico-system,Attempt:1,} returns sandbox id \"0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf\"" Sep 13 01:35:45.155000 audit[4750]: NETFILTER_CFG table=filter:120 family=2 entries=48 op=nft_register_chain pid=4750 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:35:45.155000 audit[4750]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23124 a0=3 a1=ffffd6787e00 a2=0 a3=ffff807f0fa8 items=0 ppid=4047 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:45.155000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:35:45.166215 env[1589]: time="2025-09-13T01:35:45.165319478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vpflq,Uid:77870db6-b52e-4395-a518-9c1b7d66eb0e,Namespace:calico-system,Attempt:1,} returns sandbox id \"f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce\"" Sep 13 01:35:45.415503 systemd-networkd[1771]: cali51f5d94bb4b: Gained IPv6LL Sep 13 01:35:45.607431 env[1589]: time="2025-09-13T01:35:45.606143627Z" level=info msg="StopPodSandbox for \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\"" Sep 13 01:35:45.607848 env[1589]: time="2025-09-13T01:35:45.607770626Z" level=info msg="StopPodSandbox for \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\"" Sep 13 01:35:45.769632 env[1589]: 2025-09-13 01:35:45.691 [INFO][4779] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Sep 13 01:35:45.769632 env[1589]: 2025-09-13 01:35:45.691 [INFO][4779] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" iface="eth0" netns="/var/run/netns/cni-cb0dd1f8-e322-83e7-81c4-845d8737ea89" Sep 13 01:35:45.769632 env[1589]: 2025-09-13 01:35:45.691 [INFO][4779] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" iface="eth0" netns="/var/run/netns/cni-cb0dd1f8-e322-83e7-81c4-845d8737ea89" Sep 13 01:35:45.769632 env[1589]: 2025-09-13 01:35:45.692 [INFO][4779] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" iface="eth0" netns="/var/run/netns/cni-cb0dd1f8-e322-83e7-81c4-845d8737ea89" Sep 13 01:35:45.769632 env[1589]: 2025-09-13 01:35:45.692 [INFO][4779] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Sep 13 01:35:45.769632 env[1589]: 2025-09-13 01:35:45.692 [INFO][4779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Sep 13 01:35:45.769632 env[1589]: 2025-09-13 01:35:45.739 [INFO][4796] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" HandleID="k8s-pod-network.74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 01:35:45.769632 env[1589]: 2025-09-13 01:35:45.739 [INFO][4796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 01:35:45.769632 env[1589]: 2025-09-13 01:35:45.739 [INFO][4796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:45.769632 env[1589]: 2025-09-13 01:35:45.756 [WARNING][4796] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" HandleID="k8s-pod-network.74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 01:35:45.769632 env[1589]: 2025-09-13 01:35:45.756 [INFO][4796] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" HandleID="k8s-pod-network.74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 01:35:45.769632 env[1589]: 2025-09-13 01:35:45.757 [INFO][4796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:45.769632 env[1589]: 2025-09-13 01:35:45.764 [INFO][4779] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Sep 13 01:35:45.775144 systemd[1]: run-containerd-runc-k8s.io-0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf-runc.mD8OVy.mount: Deactivated successfully. Sep 13 01:35:45.786663 systemd[1]: run-netns-cni\x2dcb0dd1f8\x2de322\x2d83e7\x2d81c4\x2d845d8737ea89.mount: Deactivated successfully. 
Sep 13 01:35:45.788406 env[1589]: time="2025-09-13T01:35:45.788362874Z" level=info msg="TearDown network for sandbox \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\" successfully" Sep 13 01:35:45.788570 env[1589]: time="2025-09-13T01:35:45.788514154Z" level=info msg="StopPodSandbox for \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\" returns successfully" Sep 13 01:35:45.789504 env[1589]: time="2025-09-13T01:35:45.789475833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cb6df9659-4gj4b,Uid:3671d0d8-9f53-4f15-828c-de3c99528bf3,Namespace:calico-apiserver,Attempt:1,}" Sep 13 01:35:45.790123 env[1589]: 2025-09-13 01:35:45.668 [INFO][4778] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Sep 13 01:35:45.790123 env[1589]: 2025-09-13 01:35:45.668 [INFO][4778] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" iface="eth0" netns="/var/run/netns/cni-5e38cae0-99d8-63c4-45bb-24cf81afcc68" Sep 13 01:35:45.790123 env[1589]: 2025-09-13 01:35:45.668 [INFO][4778] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" iface="eth0" netns="/var/run/netns/cni-5e38cae0-99d8-63c4-45bb-24cf81afcc68" Sep 13 01:35:45.790123 env[1589]: 2025-09-13 01:35:45.672 [INFO][4778] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" iface="eth0" netns="/var/run/netns/cni-5e38cae0-99d8-63c4-45bb-24cf81afcc68" Sep 13 01:35:45.790123 env[1589]: 2025-09-13 01:35:45.672 [INFO][4778] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Sep 13 01:35:45.790123 env[1589]: 2025-09-13 01:35:45.672 [INFO][4778] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Sep 13 01:35:45.790123 env[1589]: 2025-09-13 01:35:45.739 [INFO][4791] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" HandleID="k8s-pod-network.7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:45.790123 env[1589]: 2025-09-13 01:35:45.739 [INFO][4791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:45.790123 env[1589]: 2025-09-13 01:35:45.757 [INFO][4791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:45.790123 env[1589]: 2025-09-13 01:35:45.766 [WARNING][4791] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" HandleID="k8s-pod-network.7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:45.790123 env[1589]: 2025-09-13 01:35:45.766 [INFO][4791] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" HandleID="k8s-pod-network.7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:45.790123 env[1589]: 2025-09-13 01:35:45.769 [INFO][4791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:45.790123 env[1589]: 2025-09-13 01:35:45.788 [INFO][4778] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Sep 13 01:35:45.793720 env[1589]: time="2025-09-13T01:35:45.793686030Z" level=info msg="TearDown network for sandbox \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\" successfully" Sep 13 01:35:45.793887 env[1589]: time="2025-09-13T01:35:45.793866149Z" level=info msg="StopPodSandbox for \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\" returns successfully" Sep 13 01:35:45.796110 systemd[1]: run-netns-cni\x2d5e38cae0\x2d99d8\x2d63c4\x2d45bb\x2d24cf81afcc68.mount: Deactivated successfully. 
Sep 13 01:35:45.797809 env[1589]: time="2025-09-13T01:35:45.797777906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jxjcc,Uid:4867af55-bd3a-4712-8e0f-bf048a45159f,Namespace:kube-system,Attempt:1,}" Sep 13 01:35:45.863835 systemd-networkd[1771]: calia8b434c0622: Gained IPv6LL Sep 13 01:35:45.990351 systemd-networkd[1771]: calib51311b43a6: Gained IPv6LL Sep 13 01:35:46.107885 systemd-networkd[1771]: calic10603921ef: Link UP Sep 13 01:35:46.124250 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 01:35:46.124471 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic10603921ef: link becomes ready Sep 13 01:35:46.125238 systemd-networkd[1771]: calic10603921ef: Gained carrier Sep 13 01:35:46.126237 systemd-networkd[1771]: cali6fc3f020bef: Gained IPv6LL Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:45.977 [INFO][4804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0 calico-apiserver-cb6df9659- calico-apiserver 3671d0d8-9f53-4f15-828c-de3c99528bf3 968 0 2025-09-13 01:35:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:cb6df9659 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-9d226ffbbf calico-apiserver-cb6df9659-4gj4b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic10603921ef [] [] }} ContainerID="e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-4gj4b" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-" Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:45.977 [INFO][4804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-4gj4b" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.026 [INFO][4832] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" HandleID="k8s-pod-network.e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.028 [INFO][4832] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" HandleID="k8s-pod-network.e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002caff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-9d226ffbbf", "pod":"calico-apiserver-cb6df9659-4gj4b", "timestamp":"2025-09-13 01:35:46.024227076 +0000 UTC"}, Hostname:"ci-3510.3.8-n-9d226ffbbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.028 [INFO][4832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.028 [INFO][4832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.028 [INFO][4832] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-9d226ffbbf' Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.038 [INFO][4832] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.042 [INFO][4832] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.045 [INFO][4832] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.047 [INFO][4832] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.054 [INFO][4832] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.054 [INFO][4832] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.055 [INFO][4832] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0 Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.078 [INFO][4832] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.090 [INFO][4832] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.44.71/26] block=192.168.44.64/26 
handle="k8s-pod-network.e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.090 [INFO][4832] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.71/26] handle="k8s-pod-network.e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.090 [INFO][4832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:46.143293 env[1589]: 2025-09-13 01:35:46.090 [INFO][4832] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.71/26] IPv6=[] ContainerID="e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" HandleID="k8s-pod-network.e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 01:35:46.146784 env[1589]: 2025-09-13 01:35:46.097 [INFO][4804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-4gj4b" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0", GenerateName:"calico-apiserver-cb6df9659-", Namespace:"calico-apiserver", SelfLink:"", UID:"3671d0d8-9f53-4f15-828c-de3c99528bf3", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cb6df9659", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"", Pod:"calico-apiserver-cb6df9659-4gj4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic10603921ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:46.146784 env[1589]: 2025-09-13 01:35:46.097 [INFO][4804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.71/32] ContainerID="e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-4gj4b" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 01:35:46.146784 env[1589]: 2025-09-13 01:35:46.097 [INFO][4804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic10603921ef ContainerID="e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-4gj4b" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 01:35:46.146784 env[1589]: 2025-09-13 01:35:46.125 [INFO][4804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-4gj4b" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 
01:35:46.146784 env[1589]: 2025-09-13 01:35:46.127 [INFO][4804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-4gj4b" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0", GenerateName:"calico-apiserver-cb6df9659-", Namespace:"calico-apiserver", SelfLink:"", UID:"3671d0d8-9f53-4f15-828c-de3c99528bf3", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cb6df9659", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0", Pod:"calico-apiserver-cb6df9659-4gj4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic10603921ef", MAC:"52:ba:a4:98:0b:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 
01:35:46.146784 env[1589]: 2025-09-13 01:35:46.140 [INFO][4804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0" Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-4gj4b" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 01:35:46.177000 audit[4853]: NETFILTER_CFG table=filter:121 family=2 entries=53 op=nft_register_chain pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:35:46.185307 kernel: kauditd_printk_skb: 580 callbacks suppressed Sep 13 01:35:46.185484 kernel: audit: type=1325 audit(1757727346.177:429): table=filter:121 family=2 entries=53 op=nft_register_chain pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:35:46.200256 env[1589]: time="2025-09-13T01:35:46.192589737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:46.200256 env[1589]: time="2025-09-13T01:35:46.192629337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:46.200256 env[1589]: time="2025-09-13T01:35:46.192638977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:46.200256 env[1589]: time="2025-09-13T01:35:46.192826777Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0 pid=4861 runtime=io.containerd.runc.v2 Sep 13 01:35:46.177000 audit[4853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26624 a0=3 a1=fffff70e1fc0 a2=0 a3=ffff968b2fa8 items=0 ppid=4047 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:46.226873 kernel: audit: type=1300 audit(1757727346.177:429): arch=c00000b7 syscall=211 success=yes exit=26624 a0=3 a1=fffff70e1fc0 a2=0 a3=ffff968b2fa8 items=0 ppid=4047 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:46.177000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:35:46.242673 kernel: audit: type=1327 audit(1757727346.177:429): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:35:46.254421 systemd-networkd[1771]: calid929f1b0d49: Link UP Sep 13 01:35:46.267204 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid929f1b0d49: link becomes ready Sep 13 01:35:46.269076 systemd-networkd[1771]: calid929f1b0d49: Gained carrier Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:45.999 [INFO][4808] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0 coredns-7c65d6cfc9- kube-system 4867af55-bd3a-4712-8e0f-bf048a45159f 967 0 2025-09-13 01:35:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-9d226ffbbf coredns-7c65d6cfc9-jxjcc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid929f1b0d49 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jxjcc" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-" Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:45.999 [INFO][4808] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jxjcc" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.057 [INFO][4838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" HandleID="k8s-pod-network.27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.057 [INFO][4838] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" HandleID="k8s-pod-network.27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1600), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-3510.3.8-n-9d226ffbbf", "pod":"coredns-7c65d6cfc9-jxjcc", "timestamp":"2025-09-13 01:35:46.057836608 +0000 UTC"}, Hostname:"ci-3510.3.8-n-9d226ffbbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.058 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.090 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.090 [INFO][4838] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-9d226ffbbf' Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.143 [INFO][4838] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.151 [INFO][4838] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.155 [INFO][4838] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.158 [INFO][4838] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.160 [INFO][4838] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.160 [INFO][4838] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 
01:35:46.295688 env[1589]: 2025-09-13 01:35:46.162 [INFO][4838] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9 Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.170 [INFO][4838] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.199 [INFO][4838] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.44.72/26] block=192.168.44.64/26 handle="k8s-pod-network.27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.199 [INFO][4838] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.72/26] handle="k8s-pod-network.27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.199 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 01:35:46.295688 env[1589]: 2025-09-13 01:35:46.199 [INFO][4838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.72/26] IPv6=[] ContainerID="27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" HandleID="k8s-pod-network.27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:46.296285 env[1589]: 2025-09-13 01:35:46.245 [INFO][4808] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jxjcc" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4867af55-bd3a-4712-8e0f-bf048a45159f", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"", Pod:"coredns-7c65d6cfc9-jxjcc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid929f1b0d49", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:46.296285 env[1589]: 2025-09-13 01:35:46.245 [INFO][4808] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.72/32] ContainerID="27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jxjcc" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:46.296285 env[1589]: 2025-09-13 01:35:46.245 [INFO][4808] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid929f1b0d49 ContainerID="27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jxjcc" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:46.296285 env[1589]: 2025-09-13 01:35:46.255 [INFO][4808] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jxjcc" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:46.296285 env[1589]: 2025-09-13 01:35:46.268 [INFO][4808] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jxjcc" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4867af55-bd3a-4712-8e0f-bf048a45159f", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9", Pod:"coredns-7c65d6cfc9-jxjcc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid929f1b0d49", MAC:"36:94:e0:c1:45:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:46.296285 env[1589]: 2025-09-13 01:35:46.289 [INFO][4808] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jxjcc" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:46.319000 audit[4902]: NETFILTER_CFG table=filter:122 family=2 entries=58 op=nft_register_chain pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:35:46.321497 env[1589]: time="2025-09-13T01:35:46.318431273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cb6df9659-4gj4b,Uid:3671d0d8-9f53-4f15-828c-de3c99528bf3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0\"" Sep 13 01:35:46.319000 audit[4902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26744 a0=3 a1=ffffd4e84940 a2=0 a3=ffffb9b8bfa8 items=0 ppid=4047 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:46.365634 kernel: audit: type=1325 audit(1757727346.319:430): table=filter:122 family=2 entries=58 op=nft_register_chain pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:35:46.365760 kernel: audit: type=1300 audit(1757727346.319:430): arch=c00000b7 syscall=211 success=yes exit=26744 a0=3 a1=ffffd4e84940 a2=0 a3=ffffb9b8bfa8 items=0 ppid=4047 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:46.319000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:35:46.383345 kernel: audit: type=1327 audit(1757727346.319:430): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:35:46.383718 systemd-networkd[1771]: calid79375b0230: Gained IPv6LL Sep 13 01:35:46.388637 env[1589]: time="2025-09-13T01:35:46.388554895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:46.388637 env[1589]: time="2025-09-13T01:35:46.388601815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:46.388637 env[1589]: time="2025-09-13T01:35:46.388611855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:46.389043 env[1589]: time="2025-09-13T01:35:46.388984214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9 pid=4910 runtime=io.containerd.runc.v2 Sep 13 01:35:46.450302 env[1589]: time="2025-09-13T01:35:46.450254164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jxjcc,Uid:4867af55-bd3a-4712-8e0f-bf048a45159f,Namespace:kube-system,Attempt:1,} returns sandbox id \"27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9\"" Sep 13 01:35:46.454874 env[1589]: time="2025-09-13T01:35:46.454826760Z" level=info msg="CreateContainer within sandbox \"27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 01:35:46.508655 env[1589]: time="2025-09-13T01:35:46.508596675Z" level=info msg="CreateContainer within sandbox \"27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"8cb7624fedc06eabbcf535b5cba0aba7fb6ee42318e944a366c1dcddba3be0d5\"" Sep 13 01:35:46.510951 env[1589]: time="2025-09-13T01:35:46.510922793Z" level=info msg="StartContainer for \"8cb7624fedc06eabbcf535b5cba0aba7fb6ee42318e944a366c1dcddba3be0d5\"" Sep 13 01:35:46.564870 env[1589]: time="2025-09-13T01:35:46.564822189Z" level=info msg="StartContainer for \"8cb7624fedc06eabbcf535b5cba0aba7fb6ee42318e944a366c1dcddba3be0d5\" returns successfully" Sep 13 01:35:46.837167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2730267536.mount: Deactivated successfully. Sep 13 01:35:46.901293 env[1589]: time="2025-09-13T01:35:46.901247030Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:46.915069 env[1589]: time="2025-09-13T01:35:46.915029659Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:46.922370 env[1589]: time="2025-09-13T01:35:46.922331453Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:46.931049 env[1589]: time="2025-09-13T01:35:46.930999966Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:46.931878 env[1589]: time="2025-09-13T01:35:46.931844925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 13 01:35:46.935710 env[1589]: time="2025-09-13T01:35:46.935672482Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 01:35:46.936531 env[1589]: time="2025-09-13T01:35:46.936503921Z" level=info msg="CreateContainer within sandbox \"36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 01:35:46.974837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4089991433.mount: Deactivated successfully. Sep 13 01:35:46.997410 env[1589]: time="2025-09-13T01:35:46.997355831Z" level=info msg="CreateContainer within sandbox \"36ec86b7bd420f1578f3d659adcfce3eb8be084c214686b0fe60c605d7a2821f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"89325d05799a8afd2a23f2119942827ba8baee5447c1b2063b9a69ad51f21303\"" Sep 13 01:35:46.999355 env[1589]: time="2025-09-13T01:35:46.998251550Z" level=info msg="StartContainer for \"89325d05799a8afd2a23f2119942827ba8baee5447c1b2063b9a69ad51f21303\"" Sep 13 01:35:47.031418 kubelet[2688]: I0913 01:35:47.028950 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jxjcc" podStartSLOduration=43.028922925 podStartE2EDuration="43.028922925s" podCreationTimestamp="2025-09-13 01:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:35:47.026817567 +0000 UTC m=+48.527655799" watchObservedRunningTime="2025-09-13 01:35:47.028922925 +0000 UTC m=+48.529761157" Sep 13 01:35:47.066000 audit[5006]: NETFILTER_CFG table=filter:123 family=2 entries=14 op=nft_register_rule pid=5006 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:47.066000 audit[5006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff1876470 a2=0 a3=1 items=0 ppid=2833 pid=5006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:47.109507 kernel: audit: type=1325 audit(1757727347.066:431): table=filter:123 family=2 entries=14 op=nft_register_rule pid=5006 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:47.109661 kernel: audit: type=1300 audit(1757727347.066:431): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff1876470 a2=0 a3=1 items=0 ppid=2833 pid=5006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:47.066000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:47.123163 kernel: audit: type=1327 audit(1757727347.066:431): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:47.124000 audit[5006]: NETFILTER_CFG table=nat:124 family=2 entries=44 op=nft_register_rule pid=5006 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:47.138658 kernel: audit: type=1325 audit(1757727347.124:432): table=nat:124 family=2 entries=44 op=nft_register_rule pid=5006 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:47.124000 audit[5006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff1876470 a2=0 a3=1 items=0 ppid=2833 pid=5006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:47.124000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:47.156000 audit[5014]: NETFILTER_CFG table=filter:125 family=2 entries=14 op=nft_register_rule pid=5014 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:47.158086 env[1589]: time="2025-09-13T01:35:47.158037540Z" level=info msg="StartContainer for \"89325d05799a8afd2a23f2119942827ba8baee5447c1b2063b9a69ad51f21303\" returns successfully" Sep 13 01:35:47.156000 audit[5014]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc251e160 a2=0 a3=1 items=0 ppid=2833 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:47.156000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:47.182000 audit[5014]: NETFILTER_CFG table=nat:126 family=2 entries=56 op=nft_register_chain pid=5014 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:47.182000 audit[5014]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffc251e160 a2=0 a3=1 items=0 ppid=2833 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:47.182000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:47.213876 kubelet[2688]: I0913 01:35:47.213839 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 01:35:47.590409 systemd-networkd[1771]: calic10603921ef: Gained IPv6LL Sep 13 01:35:47.604438 env[1589]: time="2025-09-13T01:35:47.604394696Z" level=info msg="StopPodSandbox for \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\"" Sep 13 01:35:47.697427 env[1589]: 2025-09-13 01:35:47.650 [INFO][5074] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Sep 13 01:35:47.697427 env[1589]: 2025-09-13 01:35:47.650 [INFO][5074] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" iface="eth0" netns="/var/run/netns/cni-48a774f7-2ff8-07b5-f1f9-58f199a512dc" Sep 13 01:35:47.697427 env[1589]: 2025-09-13 01:35:47.650 [INFO][5074] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" iface="eth0" netns="/var/run/netns/cni-48a774f7-2ff8-07b5-f1f9-58f199a512dc" Sep 13 01:35:47.697427 env[1589]: 2025-09-13 01:35:47.650 [INFO][5074] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" iface="eth0" netns="/var/run/netns/cni-48a774f7-2ff8-07b5-f1f9-58f199a512dc" Sep 13 01:35:47.697427 env[1589]: 2025-09-13 01:35:47.650 [INFO][5074] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Sep 13 01:35:47.697427 env[1589]: 2025-09-13 01:35:47.650 [INFO][5074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Sep 13 01:35:47.697427 env[1589]: 2025-09-13 01:35:47.683 [INFO][5081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" HandleID="k8s-pod-network.aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:47.697427 env[1589]: 2025-09-13 01:35:47.683 [INFO][5081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 01:35:47.697427 env[1589]: 2025-09-13 01:35:47.683 [INFO][5081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:47.697427 env[1589]: 2025-09-13 01:35:47.692 [WARNING][5081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" HandleID="k8s-pod-network.aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:47.697427 env[1589]: 2025-09-13 01:35:47.692 [INFO][5081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" HandleID="k8s-pod-network.aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:47.697427 env[1589]: 2025-09-13 01:35:47.693 [INFO][5081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:47.697427 env[1589]: 2025-09-13 01:35:47.695 [INFO][5074] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Sep 13 01:35:47.697937 env[1589]: time="2025-09-13T01:35:47.697574620Z" level=info msg="TearDown network for sandbox \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\" successfully" Sep 13 01:35:47.697937 env[1589]: time="2025-09-13T01:35:47.697613380Z" level=info msg="StopPodSandbox for \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\" returns successfully" Sep 13 01:35:47.698718 env[1589]: time="2025-09-13T01:35:47.698687539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948d44db6-dqp2g,Uid:7bfd1f1e-041a-4f50-ba7b-87c117566d38,Namespace:calico-apiserver,Attempt:1,}" Sep 13 01:35:47.775271 systemd[1]: run-netns-cni\x2d48a774f7\x2d2ff8\x2d07b5\x2df1f9\x2d58f199a512dc.mount: Deactivated successfully. Sep 13 01:35:47.868833 systemd-networkd[1771]: calia93641acd7b: Link UP Sep 13 01:35:47.881915 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 01:35:47.882042 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia93641acd7b: link becomes ready Sep 13 01:35:47.884536 systemd-networkd[1771]: calia93641acd7b: Gained carrier Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.791 [INFO][5088] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0 calico-apiserver-948d44db6- calico-apiserver 7bfd1f1e-041a-4f50-ba7b-87c117566d38 998 0 2025-09-13 01:35:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:948d44db6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-9d226ffbbf calico-apiserver-948d44db6-dqp2g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia93641acd7b [] [] }} 
ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dqp2g" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-" Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.791 [INFO][5088] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dqp2g" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.818 [INFO][5102] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" HandleID="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.818 [INFO][5102] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" HandleID="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3850), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-9d226ffbbf", "pod":"calico-apiserver-948d44db6-dqp2g", "timestamp":"2025-09-13 01:35:47.818333281 +0000 UTC"}, Hostname:"ci-3510.3.8-n-9d226ffbbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.818 [INFO][5102] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.818 [INFO][5102] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.818 [INFO][5102] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-9d226ffbbf' Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.828 [INFO][5102] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.833 [INFO][5102] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.836 [INFO][5102] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.838 [INFO][5102] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.843 [INFO][5102] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.843 [INFO][5102] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.845 [INFO][5102] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7 Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.853 [INFO][5102] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:47.906476 env[1589]: 
2025-09-13 01:35:47.863 [INFO][5102] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.44.73/26] block=192.168.44.64/26 handle="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.863 [INFO][5102] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.73/26] handle="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.863 [INFO][5102] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:47.906476 env[1589]: 2025-09-13 01:35:47.863 [INFO][5102] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.73/26] IPv6=[] ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" HandleID="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:47.907470 env[1589]: 2025-09-13 01:35:47.865 [INFO][5088] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dqp2g" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0", GenerateName:"calico-apiserver-948d44db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"7bfd1f1e-041a-4f50-ba7b-87c117566d38", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948d44db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"", Pod:"calico-apiserver-948d44db6-dqp2g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia93641acd7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:47.907470 env[1589]: 2025-09-13 01:35:47.865 [INFO][5088] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.73/32] ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dqp2g" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:47.907470 env[1589]: 2025-09-13 01:35:47.865 [INFO][5088] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia93641acd7b ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dqp2g" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:47.907470 env[1589]: 2025-09-13 01:35:47.889 [INFO][5088] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dqp2g" 
WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:47.907470 env[1589]: 2025-09-13 01:35:47.889 [INFO][5088] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dqp2g" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0", GenerateName:"calico-apiserver-948d44db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"7bfd1f1e-041a-4f50-ba7b-87c117566d38", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948d44db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7", Pod:"calico-apiserver-948d44db6-dqp2g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia93641acd7b", MAC:"a2:c9:62:72:5b:fd", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:47.907470 env[1589]: 2025-09-13 01:35:47.904 [INFO][5088] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Namespace="calico-apiserver" Pod="calico-apiserver-948d44db6-dqp2g" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:47.921000 audit[5117]: NETFILTER_CFG table=filter:127 family=2 entries=63 op=nft_register_chain pid=5117 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:35:47.921000 audit[5117]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30648 a0=3 a1=ffffc09e8210 a2=0 a3=ffffaf3e1fa8 items=0 ppid=4047 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:47.921000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:35:47.924316 env[1589]: time="2025-09-13T01:35:47.924104715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:47.924316 env[1589]: time="2025-09-13T01:35:47.924146515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:47.924316 env[1589]: time="2025-09-13T01:35:47.924156515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:47.924470 env[1589]: time="2025-09-13T01:35:47.924407515Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7 pid=5127 runtime=io.containerd.runc.v2 Sep 13 01:35:47.978516 env[1589]: time="2025-09-13T01:35:47.978476711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948d44db6-dqp2g,Uid:7bfd1f1e-041a-4f50-ba7b-87c117566d38,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7\"" Sep 13 01:35:48.100000 audit[5169]: NETFILTER_CFG table=filter:128 family=2 entries=13 op=nft_register_rule pid=5169 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:48.100000 audit[5169]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe4321180 a2=0 a3=1 items=0 ppid=2833 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:48.100000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:48.106000 audit[5169]: NETFILTER_CFG table=nat:129 family=2 entries=27 op=nft_register_chain pid=5169 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:48.106000 audit[5169]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffe4321180 a2=0 a3=1 items=0 ppid=2833 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:48.106000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:48.294857 systemd-networkd[1771]: calid929f1b0d49: Gained IPv6LL Sep 13 01:35:49.273910 env[1589]: time="2025-09-13T01:35:49.273862513Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:49.282003 env[1589]: time="2025-09-13T01:35:49.281974426Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:49.288600 env[1589]: time="2025-09-13T01:35:49.288574701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:49.293798 env[1589]: time="2025-09-13T01:35:49.293774337Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:49.294454 env[1589]: time="2025-09-13T01:35:49.294428296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 13 01:35:49.296216 env[1589]: time="2025-09-13T01:35:49.296174535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 01:35:49.314019 env[1589]: time="2025-09-13T01:35:49.313978921Z" level=info msg="CreateContainer within sandbox \"4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 01:35:49.349166 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1330958496.mount: Deactivated successfully. Sep 13 01:35:49.364205 env[1589]: time="2025-09-13T01:35:49.364142361Z" level=info msg="CreateContainer within sandbox \"4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a4d9362a7a6aefc4e3f22bb4dd76c596e6f669706ea1901731bd770b6adbc5f1\"" Sep 13 01:35:49.366688 env[1589]: time="2025-09-13T01:35:49.366591679Z" level=info msg="StartContainer for \"a4d9362a7a6aefc4e3f22bb4dd76c596e6f669706ea1901731bd770b6adbc5f1\"" Sep 13 01:35:49.435520 env[1589]: time="2025-09-13T01:35:49.435463745Z" level=info msg="StartContainer for \"a4d9362a7a6aefc4e3f22bb4dd76c596e6f669706ea1901731bd770b6adbc5f1\" returns successfully" Sep 13 01:35:49.638404 systemd-networkd[1771]: calia93641acd7b: Gained IPv6LL Sep 13 01:35:50.071142 kubelet[2688]: I0913 01:35:50.070637 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-68bc4666b6-lvk7q" podStartSLOduration=3.813426484 podStartE2EDuration="10.070616042s" podCreationTimestamp="2025-09-13 01:35:40 +0000 UTC" firstStartedPulling="2025-09-13 01:35:40.676052886 +0000 UTC m=+42.176891118" lastFinishedPulling="2025-09-13 01:35:46.933242444 +0000 UTC m=+48.434080676" observedRunningTime="2025-09-13 01:35:48.059352446 +0000 UTC m=+49.560190678" watchObservedRunningTime="2025-09-13 01:35:50.070616042 +0000 UTC m=+51.571454274" Sep 13 01:35:50.112584 kubelet[2688]: I0913 01:35:50.112520 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84d8c5c799-tjhw6" podStartSLOduration=26.184439431 podStartE2EDuration="31.11250141s" podCreationTimestamp="2025-09-13 01:35:19 +0000 UTC" firstStartedPulling="2025-09-13 01:35:44.367828756 +0000 UTC m=+45.868666948" lastFinishedPulling="2025-09-13 01:35:49.295890735 +0000 UTC m=+50.796728927" 
observedRunningTime="2025-09-13 01:35:50.071948681 +0000 UTC m=+51.572786913" watchObservedRunningTime="2025-09-13 01:35:50.11250141 +0000 UTC m=+51.613339642" Sep 13 01:35:52.525298 env[1589]: time="2025-09-13T01:35:52.525258067Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:52.533755 env[1589]: time="2025-09-13T01:35:52.533715101Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:52.538523 env[1589]: time="2025-09-13T01:35:52.538485937Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:52.544290 env[1589]: time="2025-09-13T01:35:52.544252893Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:52.545669 env[1589]: time="2025-09-13T01:35:52.545171852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 01:35:52.548585 env[1589]: time="2025-09-13T01:35:52.548556290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 01:35:52.549854 env[1589]: time="2025-09-13T01:35:52.549808649Z" level=info msg="CreateContainer within sandbox \"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 01:35:52.588245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985099274.mount: 
Deactivated successfully. Sep 13 01:35:52.602115 env[1589]: time="2025-09-13T01:35:52.602063449Z" level=info msg="CreateContainer within sandbox \"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac\"" Sep 13 01:35:52.602830 env[1589]: time="2025-09-13T01:35:52.602708208Z" level=info msg="StartContainer for \"f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac\"" Sep 13 01:35:52.636596 systemd[1]: run-containerd-runc-k8s.io-f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac-runc.GTuzIa.mount: Deactivated successfully. Sep 13 01:35:52.692800 env[1589]: time="2025-09-13T01:35:52.692751900Z" level=info msg="StartContainer for \"f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac\" returns successfully" Sep 13 01:35:53.109234 kernel: kauditd_printk_skb: 17 callbacks suppressed Sep 13 01:35:53.109422 kernel: audit: type=1325 audit(1757727353.101:438): table=filter:130 family=2 entries=12 op=nft_register_rule pid=5278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:53.101000 audit[5278]: NETFILTER_CFG table=filter:130 family=2 entries=12 op=nft_register_rule pid=5278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:53.101000 audit[5278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=fffff36b5e90 a2=0 a3=1 items=0 ppid=2833 pid=5278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:53.164160 kernel: audit: type=1300 audit(1757727353.101:438): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=fffff36b5e90 a2=0 a3=1 items=0 ppid=2833 pid=5278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:53.101000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:53.179954 kernel: audit: type=1327 audit(1757727353.101:438): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:53.135000 audit[5278]: NETFILTER_CFG table=nat:131 family=2 entries=22 op=nft_register_rule pid=5278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:53.199859 kernel: audit: type=1325 audit(1757727353.135:439): table=nat:131 family=2 entries=22 op=nft_register_rule pid=5278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:53.135000 audit[5278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=fffff36b5e90 a2=0 a3=1 items=0 ppid=2833 pid=5278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:53.229467 kernel: audit: type=1300 audit(1757727353.135:439): arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=fffff36b5e90 a2=0 a3=1 items=0 ppid=2833 pid=5278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:53.135000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:53.244445 kernel: audit: type=1327 audit(1757727353.135:439): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:55.066237 kubelet[2688]: I0913 01:35:55.066199 2688 prober_manager.go:312] "Failed to 
trigger a manual run" probe="Readiness" Sep 13 01:35:55.270631 kubelet[2688]: I0913 01:35:55.223677 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-948d44db6-dx4zf" podStartSLOduration=32.116115538 podStartE2EDuration="40.223657534s" podCreationTimestamp="2025-09-13 01:35:15 +0000 UTC" firstStartedPulling="2025-09-13 01:35:44.439052535 +0000 UTC m=+45.939890767" lastFinishedPulling="2025-09-13 01:35:52.546594531 +0000 UTC m=+54.047432763" observedRunningTime="2025-09-13 01:35:53.073831811 +0000 UTC m=+54.574670043" watchObservedRunningTime="2025-09-13 01:35:55.223657534 +0000 UTC m=+56.724495726" Sep 13 01:35:55.079979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2189377529.mount: Deactivated successfully. Sep 13 01:35:55.272000 audit[5283]: NETFILTER_CFG table=filter:132 family=2 entries=11 op=nft_register_rule pid=5283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:55.272000 audit[5283]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffef0e29a0 a2=0 a3=1 items=0 ppid=2833 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:55.316672 kernel: audit: type=1325 audit(1757727355.272:440): table=filter:132 family=2 entries=11 op=nft_register_rule pid=5283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:55.316855 kernel: audit: type=1300 audit(1757727355.272:440): arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffef0e29a0 a2=0 a3=1 items=0 ppid=2833 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:55.272000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:55.331887 kernel: audit: type=1327 audit(1757727355.272:440): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:55.337000 audit[5283]: NETFILTER_CFG table=nat:133 family=2 entries=29 op=nft_register_chain pid=5283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:55.337000 audit[5283]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=ffffef0e29a0 a2=0 a3=1 items=0 ppid=2833 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:55.337000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:55.353204 kernel: audit: type=1325 audit(1757727355.337:441): table=nat:133 family=2 entries=29 op=nft_register_chain pid=5283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:56.338196 env[1589]: time="2025-09-13T01:35:56.338124603Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:56.345668 env[1589]: time="2025-09-13T01:35:56.345618197Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:56.350497 env[1589]: time="2025-09-13T01:35:56.350463634Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:56.354840 env[1589]: 
time="2025-09-13T01:35:56.354797590Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:56.355735 env[1589]: time="2025-09-13T01:35:56.355697670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 13 01:35:56.358274 env[1589]: time="2025-09-13T01:35:56.358160068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 01:35:56.359079 env[1589]: time="2025-09-13T01:35:56.359039947Z" level=info msg="CreateContainer within sandbox \"0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 01:35:56.395563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1312785476.mount: Deactivated successfully. Sep 13 01:35:56.414648 env[1589]: time="2025-09-13T01:35:56.414601907Z" level=info msg="CreateContainer within sandbox \"0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"cb27a2332b86ed9ee671eb70e17d874113a064251a6e5f2064065e67a9da954e\"" Sep 13 01:35:56.415553 env[1589]: time="2025-09-13T01:35:56.415522067Z" level=info msg="StartContainer for \"cb27a2332b86ed9ee671eb70e17d874113a064251a6e5f2064065e67a9da954e\"" Sep 13 01:35:57.479339 env[1589]: time="2025-09-13T01:35:57.479284424Z" level=info msg="StartContainer for \"cb27a2332b86ed9ee671eb70e17d874113a064251a6e5f2064065e67a9da954e\" returns successfully" Sep 13 01:35:57.516251 systemd[1]: run-containerd-runc-k8s.io-cb27a2332b86ed9ee671eb70e17d874113a064251a6e5f2064065e67a9da954e-runc.hayK7c.mount: Deactivated successfully. 
Sep 13 01:35:57.545000 audit[5337]: NETFILTER_CFG table=filter:134 family=2 entries=10 op=nft_register_rule pid=5337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:57.545000 audit[5337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=fffffed420b0 a2=0 a3=1 items=0 ppid=2833 pid=5337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:57.545000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:57.562000 audit[5337]: NETFILTER_CFG table=nat:135 family=2 entries=24 op=nft_register_rule pid=5337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:57.562000 audit[5337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7308 a0=3 a1=fffffed420b0 a2=0 a3=1 items=0 ppid=2833 pid=5337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:57.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:58.435920 env[1589]: time="2025-09-13T01:35:58.435868546Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:58.443470 env[1589]: time="2025-09-13T01:35:58.443423021Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:58.448171 env[1589]: time="2025-09-13T01:35:58.448123537Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:58.452853 env[1589]: time="2025-09-13T01:35:58.452807934Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:58.453537 env[1589]: time="2025-09-13T01:35:58.453502894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 13 01:35:58.456759 env[1589]: time="2025-09-13T01:35:58.456616891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 01:35:58.457624 env[1589]: time="2025-09-13T01:35:58.457594571Z" level=info msg="CreateContainer within sandbox \"f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 01:35:58.496909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1549084231.mount: Deactivated successfully. Sep 13 01:35:58.529704 systemd[1]: run-containerd-runc-k8s.io-cb27a2332b86ed9ee671eb70e17d874113a064251a6e5f2064065e67a9da954e-runc.9MMiPo.mount: Deactivated successfully. 
Sep 13 01:35:58.531015 env[1589]: time="2025-09-13T01:35:58.530525919Z" level=info msg="CreateContainer within sandbox \"f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3281a8dc468c23eb76c94dbbfaa64732acb1d58f8f76888e06df2768c33fb847\"" Sep 13 01:35:58.532414 env[1589]: time="2025-09-13T01:35:58.531412239Z" level=info msg="StartContainer for \"3281a8dc468c23eb76c94dbbfaa64732acb1d58f8f76888e06df2768c33fb847\"" Sep 13 01:35:58.702926 env[1589]: time="2025-09-13T01:35:58.702832518Z" level=info msg="StartContainer for \"3281a8dc468c23eb76c94dbbfaa64732acb1d58f8f76888e06df2768c33fb847\" returns successfully" Sep 13 01:35:58.732005 env[1589]: time="2025-09-13T01:35:58.731969178Z" level=info msg="StopPodSandbox for \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\"" Sep 13 01:35:58.803531 env[1589]: time="2025-09-13T01:35:58.803480487Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:58.804915 env[1589]: 2025-09-13 01:35:58.769 [WARNING][5406] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0", GenerateName:"calico-kube-controllers-84d8c5c799-", Namespace:"calico-system", SelfLink:"", UID:"ccc7d775-178b-498c-91bf-f43ba47754ca", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84d8c5c799", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f", Pod:"calico-kube-controllers-84d8c5c799-tjhw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali51f5d94bb4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:58.804915 env[1589]: 2025-09-13 01:35:58.769 [INFO][5406] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Sep 13 01:35:58.804915 env[1589]: 2025-09-13 01:35:58.769 [INFO][5406] cni-plugin/dataplane_linux.go 
555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" iface="eth0" netns="" Sep 13 01:35:58.804915 env[1589]: 2025-09-13 01:35:58.769 [INFO][5406] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Sep 13 01:35:58.804915 env[1589]: 2025-09-13 01:35:58.769 [INFO][5406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Sep 13 01:35:58.804915 env[1589]: 2025-09-13 01:35:58.790 [INFO][5413] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" HandleID="k8s-pod-network.b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:58.804915 env[1589]: 2025-09-13 01:35:58.791 [INFO][5413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:58.804915 env[1589]: 2025-09-13 01:35:58.791 [INFO][5413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:58.804915 env[1589]: 2025-09-13 01:35:58.799 [WARNING][5413] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" HandleID="k8s-pod-network.b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:58.804915 env[1589]: 2025-09-13 01:35:58.799 [INFO][5413] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" HandleID="k8s-pod-network.b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:58.804915 env[1589]: 2025-09-13 01:35:58.801 [INFO][5413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:58.804915 env[1589]: 2025-09-13 01:35:58.802 [INFO][5406] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Sep 13 01:35:58.805386 env[1589]: time="2025-09-13T01:35:58.805356686Z" level=info msg="TearDown network for sandbox \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\" successfully" Sep 13 01:35:58.805447 env[1589]: time="2025-09-13T01:35:58.805431926Z" level=info msg="StopPodSandbox for \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\" returns successfully" Sep 13 01:35:58.806111 env[1589]: time="2025-09-13T01:35:58.806076525Z" level=info msg="RemovePodSandbox for \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\"" Sep 13 01:35:58.806166 env[1589]: time="2025-09-13T01:35:58.806119045Z" level=info msg="Forcibly stopping sandbox \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\"" Sep 13 01:35:58.812616 env[1589]: time="2025-09-13T01:35:58.812587041Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:58.823259 env[1589]: time="2025-09-13T01:35:58.823130033Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:58.829228 env[1589]: time="2025-09-13T01:35:58.829149429Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:58.829620 env[1589]: time="2025-09-13T01:35:58.829594109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 01:35:58.831710 env[1589]: time="2025-09-13T01:35:58.831686027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 01:35:58.834731 env[1589]: time="2025-09-13T01:35:58.834703665Z" level=info msg="CreateContainer within sandbox \"e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 01:35:58.894985 env[1589]: time="2025-09-13T01:35:58.894300623Z" level=info msg="CreateContainer within sandbox \"e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7e1e666ed93c9ffee4be1db0f7b827c2aa1b3b5aedf59c0d0ec79e2b1613c937\"" Sep 13 01:35:58.897411 env[1589]: time="2025-09-13T01:35:58.896420822Z" level=info msg="StartContainer for \"7e1e666ed93c9ffee4be1db0f7b827c2aa1b3b5aedf59c0d0ec79e2b1613c937\"" Sep 13 01:35:58.905647 env[1589]: 2025-09-13 01:35:58.866 [WARNING][5427] cni-plugin/k8s.go 604: 
CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0", GenerateName:"calico-kube-controllers-84d8c5c799-", Namespace:"calico-system", SelfLink:"", UID:"ccc7d775-178b-498c-91bf-f43ba47754ca", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84d8c5c799", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"4c3a709df0f4ab3fadca13c159fb8895399a44dc7c3a1329459d9eff6c90e35f", Pod:"calico-kube-controllers-84d8c5c799-tjhw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali51f5d94bb4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:58.905647 env[1589]: 2025-09-13 01:35:58.866 [INFO][5427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Sep 13 01:35:58.905647 
env[1589]: 2025-09-13 01:35:58.866 [INFO][5427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" iface="eth0" netns="" Sep 13 01:35:58.905647 env[1589]: 2025-09-13 01:35:58.866 [INFO][5427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Sep 13 01:35:58.905647 env[1589]: 2025-09-13 01:35:58.866 [INFO][5427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Sep 13 01:35:58.905647 env[1589]: 2025-09-13 01:35:58.888 [INFO][5434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" HandleID="k8s-pod-network.b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:58.905647 env[1589]: 2025-09-13 01:35:58.889 [INFO][5434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:58.905647 env[1589]: 2025-09-13 01:35:58.889 [INFO][5434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:58.905647 env[1589]: 2025-09-13 01:35:58.900 [WARNING][5434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" HandleID="k8s-pod-network.b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:58.905647 env[1589]: 2025-09-13 01:35:58.900 [INFO][5434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" HandleID="k8s-pod-network.b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--kube--controllers--84d8c5c799--tjhw6-eth0" Sep 13 01:35:58.905647 env[1589]: 2025-09-13 01:35:58.902 [INFO][5434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:58.905647 env[1589]: 2025-09-13 01:35:58.903 [INFO][5427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1" Sep 13 01:35:58.906047 env[1589]: time="2025-09-13T01:35:58.905673775Z" level=info msg="TearDown network for sandbox \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\" successfully" Sep 13 01:35:58.920929 env[1589]: time="2025-09-13T01:35:58.920881965Z" level=info msg="RemovePodSandbox \"b07d8d5fac065df8906204b4ac57ed114b4c9c0be4ad0bffbba4566fa4449be1\" returns successfully" Sep 13 01:35:58.921547 env[1589]: time="2025-09-13T01:35:58.921518924Z" level=info msg="StopPodSandbox for \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\"" Sep 13 01:35:58.992604 env[1589]: time="2025-09-13T01:35:58.991211035Z" level=info msg="StartContainer for \"7e1e666ed93c9ffee4be1db0f7b827c2aa1b3b5aedf59c0d0ec79e2b1613c937\" returns successfully" Sep 13 01:35:59.045618 env[1589]: 2025-09-13 01:35:59.002 [WARNING][5471] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4867af55-bd3a-4712-8e0f-bf048a45159f", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9", Pod:"coredns-7c65d6cfc9-jxjcc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid929f1b0d49", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:59.045618 env[1589]: 2025-09-13 
01:35:59.002 [INFO][5471] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Sep 13 01:35:59.045618 env[1589]: 2025-09-13 01:35:59.002 [INFO][5471] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" iface="eth0" netns="" Sep 13 01:35:59.045618 env[1589]: 2025-09-13 01:35:59.002 [INFO][5471] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Sep 13 01:35:59.045618 env[1589]: 2025-09-13 01:35:59.002 [INFO][5471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Sep 13 01:35:59.045618 env[1589]: 2025-09-13 01:35:59.032 [INFO][5493] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" HandleID="k8s-pod-network.7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:59.045618 env[1589]: 2025-09-13 01:35:59.032 [INFO][5493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:59.045618 env[1589]: 2025-09-13 01:35:59.032 [INFO][5493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:59.045618 env[1589]: 2025-09-13 01:35:59.041 [WARNING][5493] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" HandleID="k8s-pod-network.7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:59.045618 env[1589]: 2025-09-13 01:35:59.041 [INFO][5493] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" HandleID="k8s-pod-network.7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:59.045618 env[1589]: 2025-09-13 01:35:59.043 [INFO][5493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:59.045618 env[1589]: 2025-09-13 01:35:59.044 [INFO][5471] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Sep 13 01:35:59.046147 env[1589]: time="2025-09-13T01:35:59.046113517Z" level=info msg="TearDown network for sandbox \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\" successfully" Sep 13 01:35:59.046232 env[1589]: time="2025-09-13T01:35:59.046213757Z" level=info msg="StopPodSandbox for \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\" returns successfully" Sep 13 01:35:59.048058 env[1589]: time="2025-09-13T01:35:59.048005276Z" level=info msg="RemovePodSandbox for \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\"" Sep 13 01:35:59.048136 env[1589]: time="2025-09-13T01:35:59.048059116Z" level=info msg="Forcibly stopping sandbox \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\"" Sep 13 01:35:59.175250 env[1589]: 2025-09-13 01:35:59.108 [WARNING][5509] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4867af55-bd3a-4712-8e0f-bf048a45159f", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"27d821605e8dc1712d276ce9a87cb3cba3eb4e1af0a47cb231ed28b82983c1e9", Pod:"coredns-7c65d6cfc9-jxjcc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid929f1b0d49", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:59.175250 env[1589]: 2025-09-13 
01:35:59.108 [INFO][5509] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Sep 13 01:35:59.175250 env[1589]: 2025-09-13 01:35:59.108 [INFO][5509] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" iface="eth0" netns="" Sep 13 01:35:59.175250 env[1589]: 2025-09-13 01:35:59.108 [INFO][5509] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Sep 13 01:35:59.175250 env[1589]: 2025-09-13 01:35:59.108 [INFO][5509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Sep 13 01:35:59.175250 env[1589]: 2025-09-13 01:35:59.159 [INFO][5518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" HandleID="k8s-pod-network.7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:59.175250 env[1589]: 2025-09-13 01:35:59.159 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:59.175250 env[1589]: 2025-09-13 01:35:59.159 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:59.175250 env[1589]: 2025-09-13 01:35:59.168 [WARNING][5518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" HandleID="k8s-pod-network.7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:59.175250 env[1589]: 2025-09-13 01:35:59.168 [INFO][5518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" HandleID="k8s-pod-network.7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--jxjcc-eth0" Sep 13 01:35:59.175250 env[1589]: 2025-09-13 01:35:59.169 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:59.175250 env[1589]: 2025-09-13 01:35:59.173 [INFO][5509] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4" Sep 13 01:35:59.175689 env[1589]: time="2025-09-13T01:35:59.175278947Z" level=info msg="TearDown network for sandbox \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\" successfully" Sep 13 01:35:59.184508 env[1589]: time="2025-09-13T01:35:59.184464701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:59.193408 env[1589]: time="2025-09-13T01:35:59.193363854Z" level=info msg="RemovePodSandbox \"7a8a6757ecb67bf01ea0b9f30ef8745b147d91eb7c504386679114bae63fa6c4\" returns successfully" Sep 13 01:35:59.194073 env[1589]: time="2025-09-13T01:35:59.194048694Z" level=info msg="StopPodSandbox for \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\"" Sep 13 01:35:59.199006 env[1589]: time="2025-09-13T01:35:59.198975691Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:59.203284 env[1589]: time="2025-09-13T01:35:59.203252408Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:59.209642 env[1589]: time="2025-09-13T01:35:59.209608963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:59.210001 env[1589]: time="2025-09-13T01:35:59.209968723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 01:35:59.212271 env[1589]: time="2025-09-13T01:35:59.212240441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 01:35:59.218209 env[1589]: time="2025-09-13T01:35:59.217241158Z" level=info msg="CreateContainer within sandbox \"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 01:35:59.256747 env[1589]: time="2025-09-13T01:35:59.256651770Z" level=info msg="CreateContainer within sandbox \"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"26e2c3a8d1b553f2cafb412c112af8aa79bd8b682fa6c0c2b1b2909dfd62f666\"" Sep 13 01:35:59.258392 env[1589]: time="2025-09-13T01:35:59.257620090Z" level=info msg="StartContainer for \"26e2c3a8d1b553f2cafb412c112af8aa79bd8b682fa6c0c2b1b2909dfd62f666\"" Sep 13 01:35:59.336696 env[1589]: 2025-09-13 01:35:59.273 [WARNING][5533] cni-plugin/k8s.go 604: 
CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0", GenerateName:"calico-apiserver-948d44db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"7bfd1f1e-041a-4f50-ba7b-87c117566d38", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948d44db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7", Pod:"calico-apiserver-948d44db6-dqp2g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia93641acd7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:59.336696 env[1589]: 2025-09-13 01:35:59.275 [INFO][5533] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Sep 13 01:35:59.336696 env[1589]: 2025-09-13 
01:35:59.275 [INFO][5533] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" iface="eth0" netns="" Sep 13 01:35:59.336696 env[1589]: 2025-09-13 01:35:59.275 [INFO][5533] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Sep 13 01:35:59.336696 env[1589]: 2025-09-13 01:35:59.275 [INFO][5533] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Sep 13 01:35:59.336696 env[1589]: 2025-09-13 01:35:59.316 [INFO][5553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" HandleID="k8s-pod-network.aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:59.336696 env[1589]: 2025-09-13 01:35:59.316 [INFO][5553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:59.336696 env[1589]: 2025-09-13 01:35:59.316 [INFO][5553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:59.336696 env[1589]: 2025-09-13 01:35:59.328 [WARNING][5553] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" HandleID="k8s-pod-network.aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:59.336696 env[1589]: 2025-09-13 01:35:59.328 [INFO][5553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" HandleID="k8s-pod-network.aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:59.336696 env[1589]: 2025-09-13 01:35:59.334 [INFO][5553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:59.336696 env[1589]: 2025-09-13 01:35:59.335 [INFO][5533] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Sep 13 01:35:59.337285 env[1589]: time="2025-09-13T01:35:59.337245674Z" level=info msg="TearDown network for sandbox \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\" successfully" Sep 13 01:35:59.337370 env[1589]: time="2025-09-13T01:35:59.337351994Z" level=info msg="StopPodSandbox for \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\" returns successfully" Sep 13 01:35:59.337880 env[1589]: time="2025-09-13T01:35:59.337857074Z" level=info msg="RemovePodSandbox for \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\"" Sep 13 01:35:59.338101 env[1589]: time="2025-09-13T01:35:59.338062354Z" level=info msg="Forcibly stopping sandbox \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\"" Sep 13 01:35:59.457237 env[1589]: time="2025-09-13T01:35:59.457162831Z" level=info msg="StartContainer for \"26e2c3a8d1b553f2cafb412c112af8aa79bd8b682fa6c0c2b1b2909dfd62f666\" returns successfully" Sep 13 01:35:59.517135 systemd[1]: 
run-containerd-runc-k8s.io-3281a8dc468c23eb76c94dbbfaa64732acb1d58f8f76888e06df2768c33fb847-runc.2339r0.mount: Deactivated successfully. Sep 13 01:35:59.521525 kubelet[2688]: I0913 01:35:59.520601 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-2qlzv" podStartSLOduration=29.308194093 podStartE2EDuration="40.520585347s" podCreationTimestamp="2025-09-13 01:35:19 +0000 UTC" firstStartedPulling="2025-09-13 01:35:45.144883335 +0000 UTC m=+46.645721567" lastFinishedPulling="2025-09-13 01:35:56.357274589 +0000 UTC m=+57.858112821" observedRunningTime="2025-09-13 01:35:57.515810078 +0000 UTC m=+59.016648310" watchObservedRunningTime="2025-09-13 01:35:59.520585347 +0000 UTC m=+61.021423579" Sep 13 01:35:59.540765 env[1589]: 2025-09-13 01:35:59.392 [WARNING][5575] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0", GenerateName:"calico-apiserver-948d44db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"7bfd1f1e-041a-4f50-ba7b-87c117566d38", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948d44db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7", Pod:"calico-apiserver-948d44db6-dqp2g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia93641acd7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:59.540765 env[1589]: 2025-09-13 01:35:59.392 [INFO][5575] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Sep 13 01:35:59.540765 env[1589]: 2025-09-13 01:35:59.392 [INFO][5575] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" iface="eth0" netns="" Sep 13 01:35:59.540765 env[1589]: 2025-09-13 01:35:59.393 [INFO][5575] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Sep 13 01:35:59.540765 env[1589]: 2025-09-13 01:35:59.393 [INFO][5575] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Sep 13 01:35:59.540765 env[1589]: 2025-09-13 01:35:59.476 [INFO][5582] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" HandleID="k8s-pod-network.aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:59.540765 env[1589]: 2025-09-13 01:35:59.476 [INFO][5582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 01:35:59.540765 env[1589]: 2025-09-13 01:35:59.476 [INFO][5582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:59.540765 env[1589]: 2025-09-13 01:35:59.485 [WARNING][5582] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" HandleID="k8s-pod-network.aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:59.540765 env[1589]: 2025-09-13 01:35:59.485 [INFO][5582] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" HandleID="k8s-pod-network.aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:35:59.540765 env[1589]: 2025-09-13 01:35:59.507 [INFO][5582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:59.540765 env[1589]: 2025-09-13 01:35:59.537 [INFO][5575] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7" Sep 13 01:35:59.541792 env[1589]: time="2025-09-13T01:35:59.541619732Z" level=info msg="TearDown network for sandbox \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\" successfully" Sep 13 01:35:59.545712 kubelet[2688]: I0913 01:35:59.542598 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-948d44db6-dqp2g" podStartSLOduration=33.312373239 podStartE2EDuration="44.542578692s" podCreationTimestamp="2025-09-13 01:35:15 +0000 UTC" firstStartedPulling="2025-09-13 01:35:47.981202269 +0000 UTC m=+49.482040501" lastFinishedPulling="2025-09-13 01:35:59.211407722 +0000 UTC m=+60.712245954" observedRunningTime="2025-09-13 01:35:59.521054867 +0000 UTC m=+61.021893099" watchObservedRunningTime="2025-09-13 01:35:59.542578692 +0000 UTC m=+61.043416884" Sep 13 01:35:59.563909 env[1589]: time="2025-09-13T01:35:59.563865157Z" level=info msg="RemovePodSandbox \"aa87d586d2429d57663c418daf2c06b2229a62b7799ff27d3300900147b420a7\" returns successfully" Sep 13 01:35:59.564530 env[1589]: time="2025-09-13T01:35:59.564502956Z" level=info msg="StopPodSandbox for \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\"" Sep 13 01:35:59.615000 audit[5627]: NETFILTER_CFG table=filter:136 family=2 entries=10 op=nft_register_rule pid=5627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:59.628376 kernel: kauditd_printk_skb: 8 callbacks suppressed Sep 13 01:35:59.628481 kernel: audit: type=1325 audit(1757727359.615:444): table=filter:136 family=2 entries=10 op=nft_register_rule pid=5627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:59.691842 kernel: audit: type=1300 audit(1757727359.615:444): arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffde839e30 a2=0 a3=1 items=0 ppid=2833 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:59.615000 audit[5627]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffde839e30 a2=0 a3=1 items=0 ppid=2833 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:59.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:59.709415 kernel: audit: type=1327 audit(1757727359.615:444): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:59.693000 audit[5627]: NETFILTER_CFG table=nat:137 family=2 entries=32 op=nft_register_rule pid=5627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:59.730021 kernel: audit: type=1325 audit(1757727359.693:445): table=nat:137 family=2 entries=32 op=nft_register_rule pid=5627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:59.693000 audit[5627]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=ffffde839e30 a2=0 a3=1 items=0 ppid=2833 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:59.759872 kernel: audit: type=1300 audit(1757727359.693:445): arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=ffffde839e30 a2=0 a3=1 items=0 ppid=2833 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:59.693000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:59.782644 kernel: audit: type=1327 audit(1757727359.693:445): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:59.809000 audit[5640]: NETFILTER_CFG table=filter:138 family=2 entries=10 op=nft_register_rule pid=5640 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:59.827281 kernel: audit: type=1325 audit(1757727359.809:446): table=filter:138 family=2 entries=10 op=nft_register_rule pid=5640 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:59.809000 audit[5640]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffd74b6b90 a2=0 a3=1 items=0 ppid=2833 pid=5640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:59.860283 kernel: audit: type=1300 audit(1757727359.809:446): arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffd74b6b90 a2=0 a3=1 items=0 ppid=2833 pid=5640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:59.809000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:59.881999 env[1589]: 2025-09-13 01:35:59.656 [WARNING][5615] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0", GenerateName:"calico-apiserver-cb6df9659-", Namespace:"calico-apiserver", SelfLink:"", UID:"3671d0d8-9f53-4f15-828c-de3c99528bf3", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cb6df9659", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0", Pod:"calico-apiserver-cb6df9659-4gj4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic10603921ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:35:59.881999 env[1589]: 2025-09-13 01:35:59.656 [INFO][5615] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Sep 13 01:35:59.881999 env[1589]: 2025-09-13 01:35:59.656 [INFO][5615] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" iface="eth0" netns="" Sep 13 01:35:59.881999 env[1589]: 2025-09-13 01:35:59.657 [INFO][5615] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Sep 13 01:35:59.881999 env[1589]: 2025-09-13 01:35:59.657 [INFO][5615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Sep 13 01:35:59.881999 env[1589]: 2025-09-13 01:35:59.764 [INFO][5630] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" HandleID="k8s-pod-network.74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 01:35:59.881999 env[1589]: 2025-09-13 01:35:59.773 [INFO][5630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:35:59.881999 env[1589]: 2025-09-13 01:35:59.773 [INFO][5630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:35:59.881999 env[1589]: 2025-09-13 01:35:59.843 [WARNING][5630] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" HandleID="k8s-pod-network.74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 01:35:59.881999 env[1589]: 2025-09-13 01:35:59.843 [INFO][5630] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" HandleID="k8s-pod-network.74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 01:35:59.881999 env[1589]: 2025-09-13 01:35:59.876 [INFO][5630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:35:59.881999 env[1589]: 2025-09-13 01:35:59.880 [INFO][5615] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Sep 13 01:35:59.882521 env[1589]: time="2025-09-13T01:35:59.882488175Z" level=info msg="TearDown network for sandbox \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\" successfully" Sep 13 01:35:59.882591 env[1589]: time="2025-09-13T01:35:59.882574935Z" level=info msg="StopPodSandbox for \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\" returns successfully" Sep 13 01:35:59.883102 env[1589]: time="2025-09-13T01:35:59.883065375Z" level=info msg="RemovePodSandbox for \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\"" Sep 13 01:35:59.883202 env[1589]: time="2025-09-13T01:35:59.883105175Z" level=info msg="Forcibly stopping sandbox \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\"" Sep 13 01:35:59.886249 kernel: audit: type=1327 audit(1757727359.809:446): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:35:59.912000 audit[5640]: NETFILTER_CFG table=nat:139 
family=2 entries=32 op=nft_register_rule pid=5640 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:59.928212 kernel: audit: type=1325 audit(1757727359.912:447): table=nat:139 family=2 entries=32 op=nft_register_rule pid=5640 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:35:59.912000 audit[5640]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=ffffd74b6b90 a2=0 a3=1 items=0 ppid=2833 pid=5640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:35:59.912000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:00.038089 env[1589]: 2025-09-13 01:35:59.955 [WARNING][5650] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0", GenerateName:"calico-apiserver-cb6df9659-", Namespace:"calico-apiserver", SelfLink:"", UID:"3671d0d8-9f53-4f15-828c-de3c99528bf3", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cb6df9659", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"e7111adde459b0df00cc9f102b708766304dde3411dd7cb351da884b157ee1b0", Pod:"calico-apiserver-cb6df9659-4gj4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic10603921ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:36:00.038089 env[1589]: 2025-09-13 01:35:59.956 [INFO][5650] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Sep 13 01:36:00.038089 env[1589]: 2025-09-13 01:35:59.956 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" iface="eth0" netns="" Sep 13 01:36:00.038089 env[1589]: 2025-09-13 01:35:59.956 [INFO][5650] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Sep 13 01:36:00.038089 env[1589]: 2025-09-13 01:35:59.956 [INFO][5650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Sep 13 01:36:00.038089 env[1589]: 2025-09-13 01:35:59.984 [INFO][5657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" HandleID="k8s-pod-network.74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 01:36:00.038089 env[1589]: 2025-09-13 01:35:59.984 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:36:00.038089 env[1589]: 2025-09-13 01:35:59.984 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:36:00.038089 env[1589]: 2025-09-13 01:36:00.013 [WARNING][5657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" HandleID="k8s-pod-network.74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 01:36:00.038089 env[1589]: 2025-09-13 01:36:00.013 [INFO][5657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" HandleID="k8s-pod-network.74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--4gj4b-eth0" Sep 13 01:36:00.038089 env[1589]: 2025-09-13 01:36:00.035 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:36:00.038089 env[1589]: 2025-09-13 01:36:00.036 [INFO][5650] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9" Sep 13 01:36:00.038649 env[1589]: time="2025-09-13T01:36:00.038615507Z" level=info msg="TearDown network for sandbox \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\" successfully" Sep 13 01:36:00.076816 env[1589]: time="2025-09-13T01:36:00.076666801Z" level=info msg="RemovePodSandbox \"74e77a1cea79225691f9048a3084a72482df9836e108942842c2fec9c46b29b9\" returns successfully" Sep 13 01:36:00.078567 env[1589]: time="2025-09-13T01:36:00.077471120Z" level=info msg="StopPodSandbox for \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\"" Sep 13 01:36:00.154996 env[1589]: 2025-09-13 01:36:00.116 [WARNING][5676] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77870db6-b52e-4395-a518-9c1b7d66eb0e", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce", Pod:"csi-node-driver-vpflq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid79375b0230", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:36:00.154996 env[1589]: 2025-09-13 01:36:00.116 [INFO][5676] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Sep 13 01:36:00.154996 env[1589]: 2025-09-13 01:36:00.116 [INFO][5676] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" iface="eth0" netns="" Sep 13 01:36:00.154996 env[1589]: 2025-09-13 01:36:00.116 [INFO][5676] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Sep 13 01:36:00.154996 env[1589]: 2025-09-13 01:36:00.116 [INFO][5676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Sep 13 01:36:00.154996 env[1589]: 2025-09-13 01:36:00.141 [INFO][5685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" HandleID="k8s-pod-network.3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:36:00.154996 env[1589]: 2025-09-13 01:36:00.142 [INFO][5685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:36:00.154996 env[1589]: 2025-09-13 01:36:00.142 [INFO][5685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:36:00.154996 env[1589]: 2025-09-13 01:36:00.150 [WARNING][5685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" HandleID="k8s-pod-network.3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:36:00.154996 env[1589]: 2025-09-13 01:36:00.150 [INFO][5685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" HandleID="k8s-pod-network.3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:36:00.154996 env[1589]: 2025-09-13 01:36:00.151 [INFO][5685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:36:00.154996 env[1589]: 2025-09-13 01:36:00.152 [INFO][5676] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Sep 13 01:36:00.154996 env[1589]: time="2025-09-13T01:36:00.153784148Z" level=info msg="TearDown network for sandbox \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\" successfully" Sep 13 01:36:00.154996 env[1589]: time="2025-09-13T01:36:00.153814028Z" level=info msg="StopPodSandbox for \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\" returns successfully" Sep 13 01:36:00.156015 env[1589]: time="2025-09-13T01:36:00.155988066Z" level=info msg="RemovePodSandbox for \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\"" Sep 13 01:36:00.156141 env[1589]: time="2025-09-13T01:36:00.156104146Z" level=info msg="Forcibly stopping sandbox \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\"" Sep 13 01:36:00.252667 env[1589]: 2025-09-13 01:36:00.203 [WARNING][5700] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77870db6-b52e-4395-a518-9c1b7d66eb0e", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce", Pod:"csi-node-driver-vpflq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid79375b0230", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:36:00.252667 env[1589]: 2025-09-13 01:36:00.204 [INFO][5700] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Sep 13 01:36:00.252667 env[1589]: 2025-09-13 01:36:00.204 [INFO][5700] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" iface="eth0" netns="" Sep 13 01:36:00.252667 env[1589]: 2025-09-13 01:36:00.204 [INFO][5700] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Sep 13 01:36:00.252667 env[1589]: 2025-09-13 01:36:00.204 [INFO][5700] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Sep 13 01:36:00.252667 env[1589]: 2025-09-13 01:36:00.238 [INFO][5707] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" HandleID="k8s-pod-network.3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:36:00.252667 env[1589]: 2025-09-13 01:36:00.238 [INFO][5707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:36:00.252667 env[1589]: 2025-09-13 01:36:00.239 [INFO][5707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:36:00.252667 env[1589]: 2025-09-13 01:36:00.248 [WARNING][5707] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" HandleID="k8s-pod-network.3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:36:00.252667 env[1589]: 2025-09-13 01:36:00.248 [INFO][5707] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" HandleID="k8s-pod-network.3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-csi--node--driver--vpflq-eth0" Sep 13 01:36:00.252667 env[1589]: 2025-09-13 01:36:00.249 [INFO][5707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:36:00.252667 env[1589]: 2025-09-13 01:36:00.250 [INFO][5700] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be" Sep 13 01:36:00.253128 env[1589]: time="2025-09-13T01:36:00.252697800Z" level=info msg="TearDown network for sandbox \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\" successfully" Sep 13 01:36:00.264617 env[1589]: time="2025-09-13T01:36:00.264563512Z" level=info msg="RemovePodSandbox \"3ada6d813106c69830ad8c1fd9059090e1365bc7e78192a38fc967fc1cbf08be\" returns successfully" Sep 13 01:36:00.265219 env[1589]: time="2025-09-13T01:36:00.265159631Z" level=info msg="StopPodSandbox for \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\"" Sep 13 01:36:00.407392 env[1589]: 2025-09-13 01:36:00.322 [WARNING][5721] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf", Pod:"goldmane-7988f88666-2qlzv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib51311b43a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:36:00.407392 env[1589]: 2025-09-13 01:36:00.322 [INFO][5721] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Sep 13 01:36:00.407392 env[1589]: 2025-09-13 01:36:00.322 [INFO][5721] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" iface="eth0" netns="" Sep 13 01:36:00.407392 env[1589]: 2025-09-13 01:36:00.322 [INFO][5721] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Sep 13 01:36:00.407392 env[1589]: 2025-09-13 01:36:00.322 [INFO][5721] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Sep 13 01:36:00.407392 env[1589]: 2025-09-13 01:36:00.360 [INFO][5728] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" HandleID="k8s-pod-network.e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:36:00.407392 env[1589]: 2025-09-13 01:36:00.361 [INFO][5728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:36:00.407392 env[1589]: 2025-09-13 01:36:00.361 [INFO][5728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:36:00.407392 env[1589]: 2025-09-13 01:36:00.386 [WARNING][5728] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" HandleID="k8s-pod-network.e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:36:00.407392 env[1589]: 2025-09-13 01:36:00.386 [INFO][5728] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" HandleID="k8s-pod-network.e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:36:00.407392 env[1589]: 2025-09-13 01:36:00.400 [INFO][5728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:36:00.407392 env[1589]: 2025-09-13 01:36:00.401 [INFO][5721] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Sep 13 01:36:00.407392 env[1589]: time="2025-09-13T01:36:00.404709135Z" level=info msg="TearDown network for sandbox \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\" successfully" Sep 13 01:36:00.407392 env[1589]: time="2025-09-13T01:36:00.404738415Z" level=info msg="StopPodSandbox for \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\" returns successfully" Sep 13 01:36:00.408279 env[1589]: time="2025-09-13T01:36:00.408236813Z" level=info msg="RemovePodSandbox for \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\"" Sep 13 01:36:00.408361 env[1589]: time="2025-09-13T01:36:00.408281853Z" level=info msg="Forcibly stopping sandbox \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\"" Sep 13 01:36:00.522199 kubelet[2688]: I0913 01:36:00.521716 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 01:36:00.714979 env[1589]: 2025-09-13 01:36:00.523 [WARNING][5743] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"e1b67e48-bb3b-4ff4-9ed0-8953d7a15f03", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"0c577c4edccda0970c1c93abe14e13706145b9107ba821e85dc669b14621bbdf", Pod:"goldmane-7988f88666-2qlzv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib51311b43a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:36:00.714979 env[1589]: 2025-09-13 01:36:00.523 [INFO][5743] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Sep 13 01:36:00.714979 env[1589]: 2025-09-13 01:36:00.523 [INFO][5743] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" iface="eth0" netns="" Sep 13 01:36:00.714979 env[1589]: 2025-09-13 01:36:00.523 [INFO][5743] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Sep 13 01:36:00.714979 env[1589]: 2025-09-13 01:36:00.523 [INFO][5743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Sep 13 01:36:00.714979 env[1589]: 2025-09-13 01:36:00.610 [INFO][5750] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" HandleID="k8s-pod-network.e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:36:00.714979 env[1589]: 2025-09-13 01:36:00.613 [INFO][5750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:36:00.714979 env[1589]: 2025-09-13 01:36:00.613 [INFO][5750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:36:00.714979 env[1589]: 2025-09-13 01:36:00.666 [WARNING][5750] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" HandleID="k8s-pod-network.e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:36:00.714979 env[1589]: 2025-09-13 01:36:00.666 [INFO][5750] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" HandleID="k8s-pod-network.e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-goldmane--7988f88666--2qlzv-eth0" Sep 13 01:36:00.714979 env[1589]: 2025-09-13 01:36:00.712 [INFO][5750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:36:00.714979 env[1589]: 2025-09-13 01:36:00.713 [INFO][5743] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc" Sep 13 01:36:00.719015 env[1589]: time="2025-09-13T01:36:00.718964639Z" level=info msg="TearDown network for sandbox \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\" successfully" Sep 13 01:36:00.733322 env[1589]: time="2025-09-13T01:36:00.733266469Z" level=info msg="RemovePodSandbox \"e937a3a4e8ae545f5505b58e90df5c48b4537a09f9e7645597c293713bfc65bc\" returns successfully" Sep 13 01:36:00.734385 env[1589]: time="2025-09-13T01:36:00.734354549Z" level=info msg="StopPodSandbox for \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\"" Sep 13 01:36:00.997327 env[1589]: 2025-09-13 01:36:00.891 [WARNING][5783] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0", GenerateName:"calico-apiserver-948d44db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"42ab5820-ae33-4608-be70-9aaefef7b587", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948d44db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864", Pod:"calico-apiserver-948d44db6-dx4zf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6fc3f020bef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:36:00.997327 env[1589]: 2025-09-13 01:36:00.891 [INFO][5783] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Sep 13 01:36:00.997327 env[1589]: 2025-09-13 01:36:00.891 [INFO][5783] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" iface="eth0" netns="" Sep 13 01:36:00.997327 env[1589]: 2025-09-13 01:36:00.891 [INFO][5783] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Sep 13 01:36:00.997327 env[1589]: 2025-09-13 01:36:00.891 [INFO][5783] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Sep 13 01:36:00.997327 env[1589]: 2025-09-13 01:36:00.973 [INFO][5801] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" HandleID="k8s-pod-network.f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:36:00.997327 env[1589]: 2025-09-13 01:36:00.974 [INFO][5801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:36:00.997327 env[1589]: 2025-09-13 01:36:00.974 [INFO][5801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:36:00.997327 env[1589]: 2025-09-13 01:36:00.987 [WARNING][5801] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" HandleID="k8s-pod-network.f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:36:00.997327 env[1589]: 2025-09-13 01:36:00.990 [INFO][5801] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" HandleID="k8s-pod-network.f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:36:00.997327 env[1589]: 2025-09-13 01:36:00.994 [INFO][5801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:36:00.997327 env[1589]: 2025-09-13 01:36:00.996 [INFO][5783] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Sep 13 01:36:00.997327 env[1589]: time="2025-09-13T01:36:00.997284168Z" level=info msg="TearDown network for sandbox \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\" successfully" Sep 13 01:36:00.997327 env[1589]: time="2025-09-13T01:36:00.997315328Z" level=info msg="StopPodSandbox for \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\" returns successfully" Sep 13 01:36:01.002540 env[1589]: time="2025-09-13T01:36:01.002492404Z" level=info msg="RemovePodSandbox for \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\"" Sep 13 01:36:01.002683 env[1589]: time="2025-09-13T01:36:01.002537084Z" level=info msg="Forcibly stopping sandbox \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\"" Sep 13 01:36:01.198076 env[1589]: 2025-09-13 01:36:01.096 [WARNING][5824] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0", GenerateName:"calico-apiserver-948d44db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"42ab5820-ae33-4608-be70-9aaefef7b587", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948d44db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864", Pod:"calico-apiserver-948d44db6-dx4zf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6fc3f020bef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:36:01.198076 env[1589]: 2025-09-13 01:36:01.096 [INFO][5824] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Sep 13 01:36:01.198076 env[1589]: 2025-09-13 01:36:01.096 [INFO][5824] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" iface="eth0" netns="" Sep 13 01:36:01.198076 env[1589]: 2025-09-13 01:36:01.096 [INFO][5824] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Sep 13 01:36:01.198076 env[1589]: 2025-09-13 01:36:01.096 [INFO][5824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Sep 13 01:36:01.198076 env[1589]: 2025-09-13 01:36:01.166 [INFO][5831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" HandleID="k8s-pod-network.f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:36:01.198076 env[1589]: 2025-09-13 01:36:01.170 [INFO][5831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:36:01.198076 env[1589]: 2025-09-13 01:36:01.170 [INFO][5831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:36:01.198076 env[1589]: 2025-09-13 01:36:01.192 [WARNING][5831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" HandleID="k8s-pod-network.f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:36:01.198076 env[1589]: 2025-09-13 01:36:01.192 [INFO][5831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" HandleID="k8s-pod-network.f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:36:01.198076 env[1589]: 2025-09-13 01:36:01.193 [INFO][5831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:36:01.198076 env[1589]: 2025-09-13 01:36:01.195 [INFO][5824] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33" Sep 13 01:36:01.198533 env[1589]: time="2025-09-13T01:36:01.198104391Z" level=info msg="TearDown network for sandbox \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\" successfully" Sep 13 01:36:01.209193 env[1589]: time="2025-09-13T01:36:01.209037584Z" level=info msg="RemovePodSandbox \"f4d24476210993c3403cd8ff76dfe52d10ecb5b01f473c4ba33b9328b2701a33\" returns successfully" Sep 13 01:36:01.209560 env[1589]: time="2025-09-13T01:36:01.209527264Z" level=info msg="StopPodSandbox for \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\"" Sep 13 01:36:01.299716 env[1589]: time="2025-09-13T01:36:01.298836723Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:36:01.313407 env[1589]: time="2025-09-13T01:36:01.313356233Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:36:01.319981 env[1589]: time="2025-09-13T01:36:01.319930589Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:36:01.329623 env[1589]: time="2025-09-13T01:36:01.329578942Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:36:01.330112 env[1589]: time="2025-09-13T01:36:01.330082102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 13 01:36:01.340132 env[1589]: time="2025-09-13T01:36:01.340086295Z" level=info msg="CreateContainer within sandbox \"f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 01:36:01.397515 env[1589]: time="2025-09-13T01:36:01.397462536Z" level=info msg="CreateContainer within sandbox \"f7df422889549a38e3a77aa40d7bf2af4be070fe6f91662211c411a2ed3b12ce\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"05b3ab9e7472ec1881ecf2c487fa895e66aea77557dd70d6a8e2e3de69bcc876\"" Sep 13 01:36:01.400904 env[1589]: time="2025-09-13T01:36:01.399443575Z" level=info msg="StartContainer for \"05b3ab9e7472ec1881ecf2c487fa895e66aea77557dd70d6a8e2e3de69bcc876\"" Sep 13 01:36:01.456551 env[1589]: 2025-09-13 01:36:01.319 [WARNING][5845] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18937242-9ba8-46ed-bd00-32bfe4ab9056", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535", Pod:"coredns-7c65d6cfc9-7rjjf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia8b434c0622", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:36:01.456551 env[1589]: 2025-09-13 
01:36:01.346 [INFO][5845] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Sep 13 01:36:01.456551 env[1589]: 2025-09-13 01:36:01.346 [INFO][5845] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" iface="eth0" netns="" Sep 13 01:36:01.456551 env[1589]: 2025-09-13 01:36:01.346 [INFO][5845] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Sep 13 01:36:01.456551 env[1589]: 2025-09-13 01:36:01.346 [INFO][5845] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Sep 13 01:36:01.456551 env[1589]: 2025-09-13 01:36:01.403 [INFO][5852] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" HandleID="k8s-pod-network.344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:36:01.456551 env[1589]: 2025-09-13 01:36:01.406 [INFO][5852] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:36:01.456551 env[1589]: 2025-09-13 01:36:01.406 [INFO][5852] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:36:01.456551 env[1589]: 2025-09-13 01:36:01.430 [WARNING][5852] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" HandleID="k8s-pod-network.344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:36:01.456551 env[1589]: 2025-09-13 01:36:01.430 [INFO][5852] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" HandleID="k8s-pod-network.344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:36:01.456551 env[1589]: 2025-09-13 01:36:01.432 [INFO][5852] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:36:01.456551 env[1589]: 2025-09-13 01:36:01.446 [INFO][5845] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Sep 13 01:36:01.457058 env[1589]: time="2025-09-13T01:36:01.456575496Z" level=info msg="TearDown network for sandbox \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\" successfully" Sep 13 01:36:01.457058 env[1589]: time="2025-09-13T01:36:01.456606936Z" level=info msg="StopPodSandbox for \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\" returns successfully" Sep 13 01:36:01.457563 env[1589]: time="2025-09-13T01:36:01.457538535Z" level=info msg="RemovePodSandbox for \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\"" Sep 13 01:36:01.457774 env[1589]: time="2025-09-13T01:36:01.457732335Z" level=info msg="Forcibly stopping sandbox \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\"" Sep 13 01:36:01.584440 kubelet[2688]: I0913 01:36:01.584307 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-cb6df9659-4gj4b" podStartSLOduration=31.072532054 podStartE2EDuration="43.584289489s" 
podCreationTimestamp="2025-09-13 01:35:18 +0000 UTC" firstStartedPulling="2025-09-13 01:35:46.319798432 +0000 UTC m=+47.820636664" lastFinishedPulling="2025-09-13 01:35:58.831555867 +0000 UTC m=+60.332394099" observedRunningTime="2025-09-13 01:35:59.543128251 +0000 UTC m=+61.043966483" watchObservedRunningTime="2025-09-13 01:36:01.584289489 +0000 UTC m=+63.085127721" Sep 13 01:36:01.653378 env[1589]: time="2025-09-13T01:36:01.653316602Z" level=info msg="StartContainer for \"05b3ab9e7472ec1881ecf2c487fa895e66aea77557dd70d6a8e2e3de69bcc876\" returns successfully" Sep 13 01:36:01.655000 audit[5914]: NETFILTER_CFG table=filter:140 family=2 entries=9 op=nft_register_rule pid=5914 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:01.655000 audit[5914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffec193bc0 a2=0 a3=1 items=0 ppid=2833 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:01.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:01.665000 audit[5914]: NETFILTER_CFG table=nat:141 family=2 entries=31 op=nft_register_chain pid=5914 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:01.672442 env[1589]: 2025-09-13 01:36:01.544 [WARNING][5886] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18937242-9ba8-46ed-bd00-32bfe4ab9056", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"c62c376a5c77d2b35f664fec40a42c8dde6a124d088478329e2099a61c66e535", Pod:"coredns-7c65d6cfc9-7rjjf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia8b434c0622", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:36:01.672442 env[1589]: 2025-09-13 
01:36:01.544 [INFO][5886] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Sep 13 01:36:01.672442 env[1589]: 2025-09-13 01:36:01.544 [INFO][5886] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" iface="eth0" netns="" Sep 13 01:36:01.672442 env[1589]: 2025-09-13 01:36:01.544 [INFO][5886] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Sep 13 01:36:01.672442 env[1589]: 2025-09-13 01:36:01.544 [INFO][5886] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Sep 13 01:36:01.672442 env[1589]: 2025-09-13 01:36:01.650 [INFO][5899] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" HandleID="k8s-pod-network.344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:36:01.672442 env[1589]: 2025-09-13 01:36:01.650 [INFO][5899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:36:01.672442 env[1589]: 2025-09-13 01:36:01.650 [INFO][5899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:36:01.672442 env[1589]: 2025-09-13 01:36:01.666 [WARNING][5899] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" HandleID="k8s-pod-network.344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:36:01.672442 env[1589]: 2025-09-13 01:36:01.667 [INFO][5899] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" HandleID="k8s-pod-network.344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-coredns--7c65d6cfc9--7rjjf-eth0" Sep 13 01:36:01.672442 env[1589]: 2025-09-13 01:36:01.668 [INFO][5899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:36:01.672442 env[1589]: 2025-09-13 01:36:01.669 [INFO][5886] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0" Sep 13 01:36:01.672442 env[1589]: time="2025-09-13T01:36:01.670810510Z" level=info msg="TearDown network for sandbox \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\" successfully" Sep 13 01:36:01.665000 audit[5914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=ffffec193bc0 a2=0 a3=1 items=0 ppid=2833 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:01.665000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:01.679834 env[1589]: time="2025-09-13T01:36:01.679770304Z" level=info msg="RemovePodSandbox \"344f38d5a059f7e4592718e4e7ba8a28daaf8fa0cd9a811330fa9878a5a7b7b0\" returns successfully" Sep 13 01:36:01.680322 env[1589]: time="2025-09-13T01:36:01.680290584Z" level=info msg="StopPodSandbox for 
\"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\"" Sep 13 01:36:01.799668 env[1589]: 2025-09-13 01:36:01.751 [WARNING][5925] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--544dd58775--5nxng-eth0" Sep 13 01:36:01.799668 env[1589]: 2025-09-13 01:36:01.751 [INFO][5925] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Sep 13 01:36:01.799668 env[1589]: 2025-09-13 01:36:01.751 [INFO][5925] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" iface="eth0" netns="" Sep 13 01:36:01.799668 env[1589]: 2025-09-13 01:36:01.751 [INFO][5925] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Sep 13 01:36:01.799668 env[1589]: 2025-09-13 01:36:01.751 [INFO][5925] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Sep 13 01:36:01.799668 env[1589]: 2025-09-13 01:36:01.785 [INFO][5935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" HandleID="k8s-pod-network.292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--544dd58775--5nxng-eth0" Sep 13 01:36:01.799668 env[1589]: 2025-09-13 01:36:01.786 [INFO][5935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:36:01.799668 env[1589]: 2025-09-13 01:36:01.786 [INFO][5935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 01:36:01.799668 env[1589]: 2025-09-13 01:36:01.794 [WARNING][5935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" HandleID="k8s-pod-network.292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--544dd58775--5nxng-eth0" Sep 13 01:36:01.799668 env[1589]: 2025-09-13 01:36:01.794 [INFO][5935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" HandleID="k8s-pod-network.292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--544dd58775--5nxng-eth0" Sep 13 01:36:01.799668 env[1589]: 2025-09-13 01:36:01.795 [INFO][5935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:36:01.799668 env[1589]: 2025-09-13 01:36:01.798 [INFO][5925] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Sep 13 01:36:01.800372 env[1589]: time="2025-09-13T01:36:01.799698463Z" level=info msg="TearDown network for sandbox \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\" successfully" Sep 13 01:36:01.800372 env[1589]: time="2025-09-13T01:36:01.799732663Z" level=info msg="StopPodSandbox for \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\" returns successfully" Sep 13 01:36:01.800618 env[1589]: time="2025-09-13T01:36:01.800585662Z" level=info msg="RemovePodSandbox for \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\"" Sep 13 01:36:01.800759 env[1589]: time="2025-09-13T01:36:01.800701622Z" level=info msg="Forcibly stopping sandbox \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\"" Sep 13 01:36:01.890078 env[1589]: 2025-09-13 01:36:01.844 [WARNING][5951] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--544dd58775--5nxng-eth0" Sep 13 01:36:01.890078 env[1589]: 2025-09-13 01:36:01.844 [INFO][5951] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Sep 13 01:36:01.890078 env[1589]: 2025-09-13 01:36:01.844 [INFO][5951] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" iface="eth0" netns="" Sep 13 01:36:01.890078 env[1589]: 2025-09-13 01:36:01.844 [INFO][5951] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Sep 13 01:36:01.890078 env[1589]: 2025-09-13 01:36:01.844 [INFO][5951] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Sep 13 01:36:01.890078 env[1589]: 2025-09-13 01:36:01.875 [INFO][5958] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" HandleID="k8s-pod-network.292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--544dd58775--5nxng-eth0" Sep 13 01:36:01.890078 env[1589]: 2025-09-13 01:36:01.875 [INFO][5958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:36:01.890078 env[1589]: 2025-09-13 01:36:01.875 [INFO][5958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:36:01.890078 env[1589]: 2025-09-13 01:36:01.885 [WARNING][5958] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" HandleID="k8s-pod-network.292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--544dd58775--5nxng-eth0" Sep 13 01:36:01.890078 env[1589]: 2025-09-13 01:36:01.885 [INFO][5958] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" HandleID="k8s-pod-network.292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-whisker--544dd58775--5nxng-eth0" Sep 13 01:36:01.890078 env[1589]: 2025-09-13 01:36:01.886 [INFO][5958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:36:01.890078 env[1589]: 2025-09-13 01:36:01.888 [INFO][5951] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450" Sep 13 01:36:01.890078 env[1589]: time="2025-09-13T01:36:01.890055481Z" level=info msg="TearDown network for sandbox \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\" successfully" Sep 13 01:36:01.899725 env[1589]: time="2025-09-13T01:36:01.899664235Z" level=info msg="RemovePodSandbox \"292dafe03472e7f18f72ce1ad965a65059f68326ea9991e6b6168772bc888450\" returns successfully" Sep 13 01:36:01.923355 kubelet[2688]: I0913 01:36:01.923315 2688 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 01:36:01.925297 kubelet[2688]: I0913 01:36:01.925269 2688 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 01:36:03.819104 kubelet[2688]: I0913 01:36:03.819021 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vpflq" 
podStartSLOduration=28.651842687 podStartE2EDuration="44.81900291s" podCreationTimestamp="2025-09-13 01:35:19 +0000 UTC" firstStartedPulling="2025-09-13 01:35:45.167458156 +0000 UTC m=+46.668296388" lastFinishedPulling="2025-09-13 01:36:01.334618379 +0000 UTC m=+62.835456611" observedRunningTime="2025-09-13 01:36:02.600686563 +0000 UTC m=+64.101524795" watchObservedRunningTime="2025-09-13 01:36:03.81900291 +0000 UTC m=+65.319841142" Sep 13 01:36:03.848000 audit[5972]: NETFILTER_CFG table=filter:142 family=2 entries=8 op=nft_register_rule pid=5972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:03.848000 audit[5972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffcb5d6630 a2=0 a3=1 items=0 ppid=2833 pid=5972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:03.848000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:03.861000 audit[5972]: NETFILTER_CFG table=nat:143 family=2 entries=38 op=nft_register_chain pid=5972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:03.861000 audit[5972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12772 a0=3 a1=ffffcb5d6630 a2=0 a3=1 items=0 ppid=2833 pid=5972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:03.861000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:07.736241 kubelet[2688]: I0913 01:36:07.736197 2688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 01:36:07.802000 audit[5977]: NETFILTER_CFG 
table=filter:144 family=2 entries=8 op=nft_register_rule pid=5977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:07.807796 kernel: kauditd_printk_skb: 14 callbacks suppressed Sep 13 01:36:07.807942 kernel: audit: type=1325 audit(1757727367.802:452): table=filter:144 family=2 entries=8 op=nft_register_rule pid=5977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:07.821622 env[1589]: time="2025-09-13T01:36:07.821572074Z" level=info msg="StopContainer for \"26e2c3a8d1b553f2cafb412c112af8aa79bd8b682fa6c0c2b1b2909dfd62f666\" with timeout 30 (s)" Sep 13 01:36:07.822224 env[1589]: time="2025-09-13T01:36:07.822085834Z" level=info msg="Stop container \"26e2c3a8d1b553f2cafb412c112af8aa79bd8b682fa6c0c2b1b2909dfd62f666\" with signal terminated" Sep 13 01:36:07.802000 audit[5977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffc8fe4100 a2=0 a3=1 items=0 ppid=2833 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:07.859198 kernel: audit: type=1300 audit(1757727367.802:452): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffc8fe4100 a2=0 a3=1 items=0 ppid=2833 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:07.802000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:07.880437 kernel: audit: type=1327 audit(1757727367.802:452): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:07.847000 audit[5977]: NETFILTER_CFG table=nat:145 family=2 entries=44 op=nft_register_chain pid=5977 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:07.904982 kernel: audit: type=1325 audit(1757727367.847:453): table=nat:145 family=2 entries=44 op=nft_register_chain pid=5977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:07.847000 audit[5977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14660 a0=3 a1=ffffc8fe4100 a2=0 a3=1 items=0 ppid=2833 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:07.940442 kernel: audit: type=1300 audit(1757727367.847:453): arch=c00000b7 syscall=211 success=yes exit=14660 a0=3 a1=ffffc8fe4100 a2=0 a3=1 items=0 ppid=2833 pid=5977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:07.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:07.987004 kernel: audit: type=1327 audit(1757727367.847:453): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:08.041227 kubelet[2688]: I0913 01:36:08.041175 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkg79\" (UniqueName: \"kubernetes.io/projected/c0bd22e9-031f-4435-b71b-54bfbd9273d5-kube-api-access-hkg79\") pod \"calico-apiserver-cb6df9659-jmvgv\" (UID: \"c0bd22e9-031f-4435-b71b-54bfbd9273d5\") " pod="calico-apiserver/calico-apiserver-cb6df9659-jmvgv" Sep 13 01:36:08.041402 kubelet[2688]: I0913 01:36:08.041235 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/c0bd22e9-031f-4435-b71b-54bfbd9273d5-calico-apiserver-certs\") pod \"calico-apiserver-cb6df9659-jmvgv\" (UID: \"c0bd22e9-031f-4435-b71b-54bfbd9273d5\") " pod="calico-apiserver/calico-apiserver-cb6df9659-jmvgv" Sep 13 01:36:08.052000 audit[5986]: NETFILTER_CFG table=filter:146 family=2 entries=8 op=nft_register_rule pid=5986 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:08.078222 kernel: audit: type=1325 audit(1757727368.052:454): table=filter:146 family=2 entries=8 op=nft_register_rule pid=5986 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:08.052000 audit[5986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffd6fb5410 a2=0 a3=1 items=0 ppid=2833 pid=5986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:08.098688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26e2c3a8d1b553f2cafb412c112af8aa79bd8b682fa6c0c2b1b2909dfd62f666-rootfs.mount: Deactivated successfully. 
Sep 13 01:36:08.052000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:08.139138 kernel: audit: type=1300 audit(1757727368.052:454): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffd6fb5410 a2=0 a3=1 items=0 ppid=2833 pid=5986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:08.139269 kernel: audit: type=1327 audit(1757727368.052:454): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:08.140000 audit[5986]: NETFILTER_CFG table=nat:147 family=2 entries=44 op=nft_unregister_chain pid=5986 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:08.154839 kernel: audit: type=1325 audit(1757727368.140:455): table=nat:147 family=2 entries=44 op=nft_unregister_chain pid=5986 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:08.140000 audit[5986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12900 a0=3 a1=ffffd6fb5410 a2=0 a3=1 items=0 ppid=2833 pid=5986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:08.140000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:08.216932 env[1589]: time="2025-09-13T01:36:08.216891544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cb6df9659-jmvgv,Uid:c0bd22e9-031f-4435-b71b-54bfbd9273d5,Namespace:calico-apiserver,Attempt:0,}" Sep 13 01:36:08.441466 env[1589]: time="2025-09-13T01:36:08.441408682Z" level=info msg="shim disconnected" 
id=26e2c3a8d1b553f2cafb412c112af8aa79bd8b682fa6c0c2b1b2909dfd62f666 Sep 13 01:36:08.441466 env[1589]: time="2025-09-13T01:36:08.441460962Z" level=warning msg="cleaning up after shim disconnected" id=26e2c3a8d1b553f2cafb412c112af8aa79bd8b682fa6c0c2b1b2909dfd62f666 namespace=k8s.io Sep 13 01:36:08.441466 env[1589]: time="2025-09-13T01:36:08.441469602Z" level=info msg="cleaning up dead shim" Sep 13 01:36:08.449551 env[1589]: time="2025-09-13T01:36:08.449504157Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:36:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6004 runtime=io.containerd.runc.v2\n" Sep 13 01:36:08.479110 env[1589]: time="2025-09-13T01:36:08.479053418Z" level=info msg="StopContainer for \"26e2c3a8d1b553f2cafb412c112af8aa79bd8b682fa6c0c2b1b2909dfd62f666\" returns successfully" Sep 13 01:36:08.479637 env[1589]: time="2025-09-13T01:36:08.479597578Z" level=info msg="StopPodSandbox for \"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7\"" Sep 13 01:36:08.479717 env[1589]: time="2025-09-13T01:36:08.479667578Z" level=info msg="Container to stop \"26e2c3a8d1b553f2cafb412c112af8aa79bd8b682fa6c0c2b1b2909dfd62f666\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:36:08.485720 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7-shm.mount: Deactivated successfully. 
Sep 13 01:36:08.664482 env[1589]: time="2025-09-13T01:36:08.663487902Z" level=info msg="shim disconnected" id=b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7 Sep 13 01:36:08.664482 env[1589]: time="2025-09-13T01:36:08.663537382Z" level=warning msg="cleaning up after shim disconnected" id=b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7 namespace=k8s.io Sep 13 01:36:08.664482 env[1589]: time="2025-09-13T01:36:08.663548342Z" level=info msg="cleaning up dead shim" Sep 13 01:36:08.673773 env[1589]: time="2025-09-13T01:36:08.673686455Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:36:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6054 runtime=io.containerd.runc.v2\n" Sep 13 01:36:08.766896 systemd-networkd[1771]: cali93d7528bfcc: Link UP Sep 13 01:36:08.784873 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 01:36:08.784987 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali93d7528bfcc: link becomes ready Sep 13 01:36:08.785921 systemd-networkd[1771]: cali93d7528bfcc: Gained carrier Sep 13 01:36:08.795559 systemd-networkd[1771]: calia93641acd7b: Link DOWN Sep 13 01:36:08.795566 systemd-networkd[1771]: calia93641acd7b: Lost carrier Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.592 [INFO][6024] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--jmvgv-eth0 calico-apiserver-cb6df9659- calico-apiserver c0bd22e9-031f-4435-b71b-54bfbd9273d5 1158 0 2025-09-13 01:36:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:cb6df9659 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-9d226ffbbf calico-apiserver-cb6df9659-jmvgv eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali93d7528bfcc [] [] }} ContainerID="36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-jmvgv" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--jmvgv-" Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.592 [INFO][6024] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-jmvgv" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--jmvgv-eth0" Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.683 [INFO][6036] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" HandleID="k8s-pod-network.36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--jmvgv-eth0" Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.685 [INFO][6036] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" HandleID="k8s-pod-network.36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--jmvgv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab4a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-9d226ffbbf", "pod":"calico-apiserver-cb6df9659-jmvgv", "timestamp":"2025-09-13 01:36:08.683903089 +0000 UTC"}, Hostname:"ci-3510.3.8-n-9d226ffbbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.685 
[INFO][6036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.685 [INFO][6036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.685 [INFO][6036] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-9d226ffbbf' Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.696 [INFO][6036] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.704 [INFO][6036] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.710 [INFO][6036] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.713 [INFO][6036] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.716 [INFO][6036] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.716 [INFO][6036] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.717 [INFO][6036] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.725 [INFO][6036] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" 
host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.743 [INFO][6036] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.44.74/26] block=192.168.44.64/26 handle="k8s-pod-network.36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.743 [INFO][6036] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.74/26] handle="k8s-pod-network.36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" host="ci-3510.3.8-n-9d226ffbbf" Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.743 [INFO][6036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:36:08.804323 env[1589]: 2025-09-13 01:36:08.743 [INFO][6036] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.74/26] IPv6=[] ContainerID="36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" HandleID="k8s-pod-network.36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--jmvgv-eth0" Sep 13 01:36:08.805041 env[1589]: 2025-09-13 01:36:08.746 [INFO][6024] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-jmvgv" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--jmvgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--jmvgv-eth0", GenerateName:"calico-apiserver-cb6df9659-", Namespace:"calico-apiserver", SelfLink:"", UID:"c0bd22e9-031f-4435-b71b-54bfbd9273d5", ResourceVersion:"1158", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 36, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cb6df9659", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"", Pod:"calico-apiserver-cb6df9659-jmvgv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali93d7528bfcc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:36:08.805041 env[1589]: 2025-09-13 01:36:08.746 [INFO][6024] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.74/32] ContainerID="36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-jmvgv" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--jmvgv-eth0" Sep 13 01:36:08.805041 env[1589]: 2025-09-13 01:36:08.746 [INFO][6024] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93d7528bfcc ContainerID="36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-jmvgv" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--jmvgv-eth0" Sep 13 01:36:08.805041 env[1589]: 2025-09-13 01:36:08.786 [INFO][6024] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" 
Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-jmvgv" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--jmvgv-eth0" Sep 13 01:36:08.805041 env[1589]: 2025-09-13 01:36:08.787 [INFO][6024] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-jmvgv" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--jmvgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--jmvgv-eth0", GenerateName:"calico-apiserver-cb6df9659-", Namespace:"calico-apiserver", SelfLink:"", UID:"c0bd22e9-031f-4435-b71b-54bfbd9273d5", ResourceVersion:"1158", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 36, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cb6df9659", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-9d226ffbbf", ContainerID:"36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e", Pod:"calico-apiserver-cb6df9659-jmvgv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali93d7528bfcc", MAC:"2a:bc:9b:19:75:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:36:08.805041 env[1589]: 2025-09-13 01:36:08.802 [INFO][6024] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e" Namespace="calico-apiserver" Pod="calico-apiserver-cb6df9659-jmvgv" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--cb6df9659--jmvgv-eth0" Sep 13 01:36:08.849946 env[1589]: time="2025-09-13T01:36:08.849876344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:36:08.850459 env[1589]: time="2025-09-13T01:36:08.850429224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:36:08.850567 env[1589]: time="2025-09-13T01:36:08.850545984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:36:08.850000 audit[6113]: NETFILTER_CFG table=filter:148 family=2 entries=59 op=nft_register_rule pid=6113 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:36:08.851052 env[1589]: time="2025-09-13T01:36:08.851003384Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e pid=6110 runtime=io.containerd.runc.v2 Sep 13 01:36:08.850000 audit[6113]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9048 a0=3 a1=fffffd489fd0 a2=0 a3=ffff832affa8 items=0 ppid=4047 pid=6113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:08.850000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:36:08.856000 audit[6113]: NETFILTER_CFG table=filter:149 family=2 entries=4 op=nft_unregister_chain pid=6113 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:36:08.856000 audit[6113]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=560 a0=3 a1=fffffd489fd0 a2=0 a3=ffff832affa8 items=0 ppid=4047 pid=6113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:08.856000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:36:08.911000 audit[6135]: NETFILTER_CFG table=filter:150 family=2 entries=57 op=nft_register_chain pid=6135 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 
01:36:08.911000 audit[6135]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27796 a0=3 a1=ffffd12cbbe0 a2=0 a3=ffff99ef8fa8 items=0 ppid=4047 pid=6135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:08.911000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:36:08.937626 env[1589]: 2025-09-13 01:36:08.783 [INFO][6079] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Sep 13 01:36:08.937626 env[1589]: 2025-09-13 01:36:08.783 [INFO][6079] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" iface="eth0" netns="/var/run/netns/cni-6b507453-0514-3ce0-d58a-daec06ba13c0" Sep 13 01:36:08.937626 env[1589]: 2025-09-13 01:36:08.784 [INFO][6079] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" iface="eth0" netns="/var/run/netns/cni-6b507453-0514-3ce0-d58a-daec06ba13c0" Sep 13 01:36:08.937626 env[1589]: 2025-09-13 01:36:08.811 [INFO][6079] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" after=27.485542ms iface="eth0" netns="/var/run/netns/cni-6b507453-0514-3ce0-d58a-daec06ba13c0" Sep 13 01:36:08.937626 env[1589]: 2025-09-13 01:36:08.811 [INFO][6079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Sep 13 01:36:08.937626 env[1589]: 2025-09-13 01:36:08.811 [INFO][6079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Sep 13 01:36:08.937626 env[1589]: 2025-09-13 01:36:08.867 [INFO][6099] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" HandleID="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:36:08.937626 env[1589]: 2025-09-13 01:36:08.867 [INFO][6099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:36:08.937626 env[1589]: 2025-09-13 01:36:08.867 [INFO][6099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 01:36:08.937626 env[1589]: 2025-09-13 01:36:08.928 [INFO][6099] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" HandleID="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:36:08.937626 env[1589]: 2025-09-13 01:36:08.928 [INFO][6099] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" HandleID="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0" Sep 13 01:36:08.937626 env[1589]: 2025-09-13 01:36:08.932 [INFO][6099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:36:08.937626 env[1589]: 2025-09-13 01:36:08.933 [INFO][6079] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Sep 13 01:36:08.937626 env[1589]: time="2025-09-13T01:36:08.936544490Z" level=info msg="TearDown network for sandbox \"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7\" successfully" Sep 13 01:36:08.937626 env[1589]: time="2025-09-13T01:36:08.936579729Z" level=info msg="StopPodSandbox for \"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7\" returns successfully" Sep 13 01:36:08.982784 env[1589]: time="2025-09-13T01:36:08.982727620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cb6df9659-jmvgv,Uid:c0bd22e9-031f-4435-b71b-54bfbd9273d5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e\"" Sep 13 01:36:08.987603 env[1589]: time="2025-09-13T01:36:08.987138938Z" level=info msg="CreateContainer within sandbox \"36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 01:36:08.993000 audit[6150]: NETFILTER_CFG table=filter:151 family=2 entries=8 op=nft_register_rule pid=6150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:08.993000 audit[6150]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffffb27ca10 a2=0 a3=1 items=0 ppid=2833 pid=6150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:08.993000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:08.998000 audit[6150]: NETFILTER_CFG table=nat:152 family=2 entries=40 op=nft_register_rule pid=6150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:08.998000 audit[6150]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=12772 a0=3 a1=fffffb27ca10 a2=0 a3=1 items=0 ppid=2833 pid=6150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:08.998000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:09.021560 env[1589]: time="2025-09-13T01:36:09.021456476Z" level=info msg="CreateContainer within sandbox \"36157e850ff9169b307a78c3ac3a8ce1c8fcf74d9cc10d546807b843677ffc1e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e1bad45337b060c96f0a8fa10118c7d436ad47b854826b3614b78169eb068caf\"" Sep 13 01:36:09.023738 env[1589]: time="2025-09-13T01:36:09.023696875Z" level=info msg="StartContainer for \"e1bad45337b060c96f0a8fa10118c7d436ad47b854826b3614b78169eb068caf\"" Sep 13 01:36:09.069728 kubelet[2688]: I0913 01:36:09.067784 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7bfd1f1e-041a-4f50-ba7b-87c117566d38-calico-apiserver-certs\") pod \"7bfd1f1e-041a-4f50-ba7b-87c117566d38\" (UID: \"7bfd1f1e-041a-4f50-ba7b-87c117566d38\") " Sep 13 01:36:09.069728 kubelet[2688]: I0913 01:36:09.067845 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgmbd\" (UniqueName: \"kubernetes.io/projected/7bfd1f1e-041a-4f50-ba7b-87c117566d38-kube-api-access-dgmbd\") pod \"7bfd1f1e-041a-4f50-ba7b-87c117566d38\" (UID: \"7bfd1f1e-041a-4f50-ba7b-87c117566d38\") " Sep 13 01:36:09.078135 kubelet[2688]: I0913 01:36:09.078076 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bfd1f1e-041a-4f50-ba7b-87c117566d38-kube-api-access-dgmbd" (OuterVolumeSpecName: "kube-api-access-dgmbd") pod "7bfd1f1e-041a-4f50-ba7b-87c117566d38" (UID: 
"7bfd1f1e-041a-4f50-ba7b-87c117566d38"). InnerVolumeSpecName "kube-api-access-dgmbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 01:36:09.100060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7-rootfs.mount: Deactivated successfully. Sep 13 01:36:09.100467 systemd[1]: run-netns-cni\x2d6b507453\x2d0514\x2d3ce0\x2dd58a\x2ddaec06ba13c0.mount: Deactivated successfully. Sep 13 01:36:09.100775 systemd[1]: var-lib-kubelet-pods-7bfd1f1e\x2d041a\x2d4f50\x2dba7b\x2d87c117566d38-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 13 01:36:09.100946 systemd[1]: var-lib-kubelet-pods-7bfd1f1e\x2d041a\x2d4f50\x2dba7b\x2d87c117566d38-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddgmbd.mount: Deactivated successfully. Sep 13 01:36:09.104471 kubelet[2688]: I0913 01:36:09.104430 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bfd1f1e-041a-4f50-ba7b-87c117566d38-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "7bfd1f1e-041a-4f50-ba7b-87c117566d38" (UID: "7bfd1f1e-041a-4f50-ba7b-87c117566d38"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 01:36:09.170417 kubelet[2688]: I0913 01:36:09.169418 2688 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7bfd1f1e-041a-4f50-ba7b-87c117566d38-calico-apiserver-certs\") on node \"ci-3510.3.8-n-9d226ffbbf\" DevicePath \"\"" Sep 13 01:36:09.170417 kubelet[2688]: I0913 01:36:09.169457 2688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgmbd\" (UniqueName: \"kubernetes.io/projected/7bfd1f1e-041a-4f50-ba7b-87c117566d38-kube-api-access-dgmbd\") on node \"ci-3510.3.8-n-9d226ffbbf\" DevicePath \"\"" Sep 13 01:36:09.180064 env[1589]: time="2025-09-13T01:36:09.180010897Z" level=info msg="StartContainer for \"e1bad45337b060c96f0a8fa10118c7d436ad47b854826b3614b78169eb068caf\" returns successfully" Sep 13 01:36:09.596491 kubelet[2688]: I0913 01:36:09.596456 2688 scope.go:117] "RemoveContainer" containerID="26e2c3a8d1b553f2cafb412c112af8aa79bd8b682fa6c0c2b1b2909dfd62f666" Sep 13 01:36:09.600331 env[1589]: time="2025-09-13T01:36:09.600280754Z" level=info msg="RemoveContainer for \"26e2c3a8d1b553f2cafb412c112af8aa79bd8b682fa6c0c2b1b2909dfd62f666\"" Sep 13 01:36:09.614065 env[1589]: time="2025-09-13T01:36:09.614005786Z" level=info msg="RemoveContainer for \"26e2c3a8d1b553f2cafb412c112af8aa79bd8b682fa6c0c2b1b2909dfd62f666\" returns successfully" Sep 13 01:36:09.926376 systemd-networkd[1771]: cali93d7528bfcc: Gained IPv6LL Sep 13 01:36:10.170000 audit[6193]: NETFILTER_CFG table=filter:153 family=2 entries=8 op=nft_register_rule pid=6193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:10.170000 audit[6193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffff5041e70 a2=0 a3=1 items=0 ppid=2833 pid=6193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 01:36:10.170000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:10.177000 audit[6193]: NETFILTER_CFG table=nat:154 family=2 entries=40 op=nft_register_rule pid=6193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:10.177000 audit[6193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12772 a0=3 a1=fffff5041e70 a2=0 a3=1 items=0 ppid=2833 pid=6193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:10.177000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:10.609276 kubelet[2688]: I0913 01:36:10.609149 2688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bfd1f1e-041a-4f50-ba7b-87c117566d38" path="/var/lib/kubelet/pods/7bfd1f1e-041a-4f50-ba7b-87c117566d38/volumes" Sep 13 01:36:11.449741 kubelet[2688]: I0913 01:36:11.449672 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-cb6df9659-jmvgv" podStartSLOduration=4.449654929 podStartE2EDuration="4.449654929s" podCreationTimestamp="2025-09-13 01:36:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:36:09.657861718 +0000 UTC m=+71.158699950" watchObservedRunningTime="2025-09-13 01:36:11.449654929 +0000 UTC m=+72.950493161" Sep 13 01:36:11.542747 env[1589]: time="2025-09-13T01:36:11.542573592Z" level=info msg="StopContainer for \"f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac\" with timeout 30 (s)" Sep 13 01:36:11.543119 env[1589]: time="2025-09-13T01:36:11.542939472Z" level=info msg="Stop container 
\"f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac\" with signal terminated" Sep 13 01:36:11.700000 audit[6208]: NETFILTER_CFG table=filter:155 family=2 entries=8 op=nft_register_rule pid=6208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:11.700000 audit[6208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffff882a9c0 a2=0 a3=1 items=0 ppid=2833 pid=6208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:11.700000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:11.714000 audit[6208]: NETFILTER_CFG table=nat:156 family=2 entries=42 op=nft_register_chain pid=6208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:36:11.714000 audit[6208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12900 a0=3 a1=fffff882a9c0 a2=0 a3=1 items=0 ppid=2833 pid=6208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:11.714000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:36:11.729980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac-rootfs.mount: Deactivated successfully. 
Sep 13 01:36:11.732079 env[1589]: time="2025-09-13T01:36:11.732019556Z" level=info msg="shim disconnected" id=f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac Sep 13 01:36:11.732177 env[1589]: time="2025-09-13T01:36:11.732080676Z" level=warning msg="cleaning up after shim disconnected" id=f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac namespace=k8s.io Sep 13 01:36:11.732177 env[1589]: time="2025-09-13T01:36:11.732092276Z" level=info msg="cleaning up dead shim" Sep 13 01:36:11.741792 env[1589]: time="2025-09-13T01:36:11.741488190Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:36:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6215 runtime=io.containerd.runc.v2\n" Sep 13 01:36:11.786724 env[1589]: time="2025-09-13T01:36:11.786662803Z" level=info msg="StopContainer for \"f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac\" returns successfully" Sep 13 01:36:11.787394 env[1589]: time="2025-09-13T01:36:11.787370242Z" level=info msg="StopPodSandbox for \"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864\"" Sep 13 01:36:11.787545 env[1589]: time="2025-09-13T01:36:11.787524122Z" level=info msg="Container to stop \"f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:36:11.790680 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864-shm.mount: Deactivated successfully. Sep 13 01:36:11.840105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864-rootfs.mount: Deactivated successfully. 
Sep 13 01:36:11.848534 env[1589]: time="2025-09-13T01:36:11.848478965Z" level=info msg="shim disconnected" id=416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864 Sep 13 01:36:11.848534 env[1589]: time="2025-09-13T01:36:11.848530245Z" level=warning msg="cleaning up after shim disconnected" id=416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864 namespace=k8s.io Sep 13 01:36:11.848673 env[1589]: time="2025-09-13T01:36:11.848540685Z" level=info msg="cleaning up dead shim" Sep 13 01:36:11.866512 env[1589]: time="2025-09-13T01:36:11.866465034Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:36:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6249 runtime=io.containerd.runc.v2\n" Sep 13 01:36:11.957272 systemd-networkd[1771]: cali6fc3f020bef: Link DOWN Sep 13 01:36:11.957279 systemd-networkd[1771]: cali6fc3f020bef: Lost carrier Sep 13 01:36:11.996000 audit[6288]: NETFILTER_CFG table=filter:157 family=2 entries=63 op=nft_register_rule pid=6288 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:36:11.996000 audit[6288]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10252 a0=3 a1=ffffda1482b0 a2=0 a3=ffff90232fa8 items=0 ppid=4047 pid=6288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:11.996000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:36:11.997000 audit[6288]: NETFILTER_CFG table=filter:158 family=2 entries=4 op=nft_unregister_chain pid=6288 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 01:36:11.997000 audit[6288]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=560 a0=3 a1=ffffda1482b0 a2=0 a3=ffff90232fa8 items=0 ppid=4047 pid=6288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:36:11.997000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 01:36:12.072615 env[1589]: 2025-09-13 01:36:11.953 [INFO][6271] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Sep 13 01:36:12.072615 env[1589]: 2025-09-13 01:36:11.956 [INFO][6271] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" iface="eth0" netns="/var/run/netns/cni-2962e812-cb75-23da-b3a4-1031ba00fdf0" Sep 13 01:36:12.072615 env[1589]: 2025-09-13 01:36:11.956 [INFO][6271] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" iface="eth0" netns="/var/run/netns/cni-2962e812-cb75-23da-b3a4-1031ba00fdf0" Sep 13 01:36:12.072615 env[1589]: 2025-09-13 01:36:11.982 [INFO][6271] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" after=25.777464ms iface="eth0" netns="/var/run/netns/cni-2962e812-cb75-23da-b3a4-1031ba00fdf0" Sep 13 01:36:12.072615 env[1589]: 2025-09-13 01:36:11.982 [INFO][6271] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Sep 13 01:36:12.072615 env[1589]: 2025-09-13 01:36:11.982 [INFO][6271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Sep 13 01:36:12.072615 env[1589]: 2025-09-13 01:36:12.013 [INFO][6283] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" HandleID="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:36:12.072615 env[1589]: 2025-09-13 01:36:12.014 [INFO][6283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:36:12.072615 env[1589]: 2025-09-13 01:36:12.014 [INFO][6283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 01:36:12.072615 env[1589]: 2025-09-13 01:36:12.067 [INFO][6283] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" HandleID="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:36:12.072615 env[1589]: 2025-09-13 01:36:12.067 [INFO][6283] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" HandleID="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0" Sep 13 01:36:12.072615 env[1589]: 2025-09-13 01:36:12.069 [INFO][6283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:36:12.072615 env[1589]: 2025-09-13 01:36:12.070 [INFO][6271] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Sep 13 01:36:12.076325 systemd[1]: run-netns-cni\x2d2962e812\x2dcb75\x2d23da\x2db3a4\x2d1031ba00fdf0.mount: Deactivated successfully. 
Sep 13 01:36:12.081375 env[1589]: time="2025-09-13T01:36:12.081308982Z" level=info msg="TearDown network for sandbox \"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864\" successfully"
Sep 13 01:36:12.081375 env[1589]: time="2025-09-13T01:36:12.081355222Z" level=info msg="StopPodSandbox for \"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864\" returns successfully"
Sep 13 01:36:12.119000 audit[6293]: NETFILTER_CFG table=filter:159 family=2 entries=8 op=nft_register_rule pid=6293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:36:12.119000 audit[6293]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffe1c183e0 a2=0 a3=1 items=0 ppid=2833 pid=6293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:36:12.119000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:36:12.126000 audit[6293]: NETFILTER_CFG table=nat:160 family=2 entries=40 op=nft_register_rule pid=6293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:36:12.126000 audit[6293]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12772 a0=3 a1=ffffe1c183e0 a2=0 a3=1 items=0 ppid=2833 pid=6293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:36:12.126000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:36:12.188296 kubelet[2688]: I0913 01:36:12.188254 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx8gv\" (UniqueName: \"kubernetes.io/projected/42ab5820-ae33-4608-be70-9aaefef7b587-kube-api-access-dx8gv\") pod \"42ab5820-ae33-4608-be70-9aaefef7b587\" (UID: \"42ab5820-ae33-4608-be70-9aaefef7b587\") "
Sep 13 01:36:12.188755 kubelet[2688]: I0913 01:36:12.188735 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42ab5820-ae33-4608-be70-9aaefef7b587-calico-apiserver-certs\") pod \"42ab5820-ae33-4608-be70-9aaefef7b587\" (UID: \"42ab5820-ae33-4608-be70-9aaefef7b587\") "
Sep 13 01:36:12.194080 systemd[1]: var-lib-kubelet-pods-42ab5820\x2dae33\x2d4608\x2dbe70\x2d9aaefef7b587-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddx8gv.mount: Deactivated successfully.
Sep 13 01:36:12.195558 kubelet[2688]: I0913 01:36:12.195530 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ab5820-ae33-4608-be70-9aaefef7b587-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "42ab5820-ae33-4608-be70-9aaefef7b587" (UID: "42ab5820-ae33-4608-be70-9aaefef7b587"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 01:36:12.199029 kubelet[2688]: I0913 01:36:12.199003 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ab5820-ae33-4608-be70-9aaefef7b587-kube-api-access-dx8gv" (OuterVolumeSpecName: "kube-api-access-dx8gv") pod "42ab5820-ae33-4608-be70-9aaefef7b587" (UID: "42ab5820-ae33-4608-be70-9aaefef7b587"). InnerVolumeSpecName "kube-api-access-dx8gv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 01:36:12.289314 kubelet[2688]: I0913 01:36:12.289200 2688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx8gv\" (UniqueName: \"kubernetes.io/projected/42ab5820-ae33-4608-be70-9aaefef7b587-kube-api-access-dx8gv\") on node \"ci-3510.3.8-n-9d226ffbbf\" DevicePath \"\""
Sep 13 01:36:12.289314 kubelet[2688]: I0913 01:36:12.289233 2688 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42ab5820-ae33-4608-be70-9aaefef7b587-calico-apiserver-certs\") on node \"ci-3510.3.8-n-9d226ffbbf\" DevicePath \"\""
Sep 13 01:36:12.615329 kubelet[2688]: I0913 01:36:12.614973 2688 scope.go:117] "RemoveContainer" containerID="f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac"
Sep 13 01:36:12.616696 env[1589]: time="2025-09-13T01:36:12.616658457Z" level=info msg="RemoveContainer for \"f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac\""
Sep 13 01:36:12.626291 env[1589]: time="2025-09-13T01:36:12.626256451Z" level=info msg="RemoveContainer for \"f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac\" returns successfully"
Sep 13 01:36:12.626890 kubelet[2688]: I0913 01:36:12.626865 2688 scope.go:117] "RemoveContainer" containerID="f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac"
Sep 13 01:36:12.627265 env[1589]: time="2025-09-13T01:36:12.627111810Z" level=error msg="ContainerStatus for \"f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac\": not found"
Sep 13 01:36:12.630755 kubelet[2688]: E0913 01:36:12.629525 2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac\": not found" containerID="f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac"
Sep 13 01:36:12.630755 kubelet[2688]: I0913 01:36:12.629569 2688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac"} err="failed to get container status \"f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"f60a5b1fb0686ffdb01c97e8e14a74a54482bae101dd5f412d10140eaab467ac\": not found"
Sep 13 01:36:12.729928 systemd[1]: var-lib-kubelet-pods-42ab5820\x2dae33\x2d4608\x2dbe70\x2d9aaefef7b587-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Sep 13 01:36:14.606369 kubelet[2688]: I0913 01:36:14.606330 2688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ab5820-ae33-4608-be70-9aaefef7b587" path="/var/lib/kubelet/pods/42ab5820-ae33-4608-be70-9aaefef7b587/volumes"
Sep 13 01:36:17.239951 systemd[1]: run-containerd-runc-k8s.io-b9419d5cd915862421d9c0e7b507fa6c072e0ade50153f249368ac4ec61c1e1b-runc.9rYTz3.mount: Deactivated successfully.
Sep 13 01:36:29.774386 systemd[1]: run-containerd-runc-k8s.io-a4d9362a7a6aefc4e3f22bb4dd76c596e6f669706ea1901731bd770b6adbc5f1-runc.FYxaGQ.mount: Deactivated successfully.
Sep 13 01:36:30.767127 systemd[1]: run-containerd-runc-k8s.io-cb27a2332b86ed9ee671eb70e17d874113a064251a6e5f2064065e67a9da954e-runc.ek0TM7.mount: Deactivated successfully.
Sep 13 01:36:47.239990 systemd[1]: run-containerd-runc-k8s.io-b9419d5cd915862421d9c0e7b507fa6c072e0ade50153f249368ac4ec61c1e1b-runc.LpUBds.mount: Deactivated successfully.
Sep 13 01:36:55.953099 systemd[1]: run-containerd-runc-k8s.io-cb27a2332b86ed9ee671eb70e17d874113a064251a6e5f2064065e67a9da954e-runc.1ZP5uZ.mount: Deactivated successfully.
Sep 13 01:37:00.658917 systemd[1]: run-containerd-runc-k8s.io-cb27a2332b86ed9ee671eb70e17d874113a064251a6e5f2064065e67a9da954e-runc.F9Hvq0.mount: Deactivated successfully.
Sep 13 01:37:01.649758 systemd[1]: run-containerd-runc-k8s.io-a4d9362a7a6aefc4e3f22bb4dd76c596e6f669706ea1901731bd770b6adbc5f1-runc.S7LwYn.mount: Deactivated successfully.
Sep 13 01:37:01.904011 env[1589]: time="2025-09-13T01:37:01.903908396Z" level=info msg="StopPodSandbox for \"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7\""
Sep 13 01:37:01.969356 env[1589]: 2025-09-13 01:37:01.936 [WARNING][6488] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0"
Sep 13 01:37:01.969356 env[1589]: 2025-09-13 01:37:01.936 [INFO][6488] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7"
Sep 13 01:37:01.969356 env[1589]: 2025-09-13 01:37:01.936 [INFO][6488] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" iface="eth0" netns=""
Sep 13 01:37:01.969356 env[1589]: 2025-09-13 01:37:01.936 [INFO][6488] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7"
Sep 13 01:37:01.969356 env[1589]: 2025-09-13 01:37:01.936 [INFO][6488] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7"
Sep 13 01:37:01.969356 env[1589]: 2025-09-13 01:37:01.957 [INFO][6495] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" HandleID="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0"
Sep 13 01:37:01.969356 env[1589]: 2025-09-13 01:37:01.957 [INFO][6495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 01:37:01.969356 env[1589]: 2025-09-13 01:37:01.957 [INFO][6495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 01:37:01.969356 env[1589]: 2025-09-13 01:37:01.965 [WARNING][6495] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" HandleID="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0"
Sep 13 01:37:01.969356 env[1589]: 2025-09-13 01:37:01.965 [INFO][6495] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" HandleID="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0"
Sep 13 01:37:01.969356 env[1589]: 2025-09-13 01:37:01.966 [INFO][6495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 01:37:01.969356 env[1589]: 2025-09-13 01:37:01.968 [INFO][6488] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7"
Sep 13 01:37:01.969846 env[1589]: time="2025-09-13T01:37:01.969812122Z" level=info msg="TearDown network for sandbox \"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7\" successfully"
Sep 13 01:37:01.969912 env[1589]: time="2025-09-13T01:37:01.969896682Z" level=info msg="StopPodSandbox for \"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7\" returns successfully"
Sep 13 01:37:01.970506 env[1589]: time="2025-09-13T01:37:01.970482365Z" level=info msg="RemovePodSandbox for \"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7\""
Sep 13 01:37:01.970755 env[1589]: time="2025-09-13T01:37:01.970712326Z" level=info msg="Forcibly stopping sandbox \"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7\""
Sep 13 01:37:02.033495 env[1589]: 2025-09-13 01:37:02.004 [WARNING][6509] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0"
Sep 13 01:37:02.033495 env[1589]: 2025-09-13 01:37:02.004 [INFO][6509] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7"
Sep 13 01:37:02.033495 env[1589]: 2025-09-13 01:37:02.004 [INFO][6509] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" iface="eth0" netns=""
Sep 13 01:37:02.033495 env[1589]: 2025-09-13 01:37:02.004 [INFO][6509] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7"
Sep 13 01:37:02.033495 env[1589]: 2025-09-13 01:37:02.004 [INFO][6509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7"
Sep 13 01:37:02.033495 env[1589]: 2025-09-13 01:37:02.021 [INFO][6516] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" HandleID="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0"
Sep 13 01:37:02.033495 env[1589]: 2025-09-13 01:37:02.022 [INFO][6516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 01:37:02.033495 env[1589]: 2025-09-13 01:37:02.022 [INFO][6516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 01:37:02.033495 env[1589]: 2025-09-13 01:37:02.030 [WARNING][6516] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" HandleID="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0"
Sep 13 01:37:02.033495 env[1589]: 2025-09-13 01:37:02.030 [INFO][6516] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" HandleID="k8s-pod-network.b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dqp2g-eth0"
Sep 13 01:37:02.033495 env[1589]: 2025-09-13 01:37:02.031 [INFO][6516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 01:37:02.033495 env[1589]: 2025-09-13 01:37:02.032 [INFO][6509] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7"
Sep 13 01:37:02.033979 env[1589]: time="2025-09-13T01:37:02.033937196Z" level=info msg="TearDown network for sandbox \"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7\" successfully"
Sep 13 01:37:02.043969 env[1589]: time="2025-09-13T01:37:02.043928484Z" level=info msg="RemovePodSandbox \"b198dde47ace93ccce93308df211d9ec92428f141c2c975c66984f4500703fb7\" returns successfully"
Sep 13 01:37:02.044618 env[1589]: time="2025-09-13T01:37:02.044594047Z" level=info msg="StopPodSandbox for \"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864\""
Sep 13 01:37:02.110567 env[1589]: 2025-09-13 01:37:02.081 [WARNING][6530] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0"
Sep 13 01:37:02.110567 env[1589]: 2025-09-13 01:37:02.081 [INFO][6530] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864"
Sep 13 01:37:02.110567 env[1589]: 2025-09-13 01:37:02.081 [INFO][6530] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" iface="eth0" netns=""
Sep 13 01:37:02.110567 env[1589]: 2025-09-13 01:37:02.081 [INFO][6530] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864"
Sep 13 01:37:02.110567 env[1589]: 2025-09-13 01:37:02.081 [INFO][6530] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864"
Sep 13 01:37:02.110567 env[1589]: 2025-09-13 01:37:02.099 [INFO][6537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" HandleID="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0"
Sep 13 01:37:02.110567 env[1589]: 2025-09-13 01:37:02.099 [INFO][6537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 01:37:02.110567 env[1589]: 2025-09-13 01:37:02.099 [INFO][6537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 01:37:02.110567 env[1589]: 2025-09-13 01:37:02.107 [WARNING][6537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" HandleID="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0"
Sep 13 01:37:02.110567 env[1589]: 2025-09-13 01:37:02.107 [INFO][6537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" HandleID="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0"
Sep 13 01:37:02.110567 env[1589]: 2025-09-13 01:37:02.108 [INFO][6537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 01:37:02.110567 env[1589]: 2025-09-13 01:37:02.109 [INFO][6530] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864"
Sep 13 01:37:02.111075 env[1589]: time="2025-09-13T01:37:02.111033810Z" level=info msg="TearDown network for sandbox \"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864\" successfully"
Sep 13 01:37:02.111148 env[1589]: time="2025-09-13T01:37:02.111133491Z" level=info msg="StopPodSandbox for \"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864\" returns successfully"
Sep 13 01:37:02.111711 env[1589]: time="2025-09-13T01:37:02.111687654Z" level=info msg="RemovePodSandbox for \"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864\""
Sep 13 01:37:02.112006 env[1589]: time="2025-09-13T01:37:02.111962495Z" level=info msg="Forcibly stopping sandbox \"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864\""
Sep 13 01:37:02.181968 env[1589]: 2025-09-13 01:37:02.150 [WARNING][6551] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" WorkloadEndpoint="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0"
Sep 13 01:37:02.181968 env[1589]: 2025-09-13 01:37:02.150 [INFO][6551] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864"
Sep 13 01:37:02.181968 env[1589]: 2025-09-13 01:37:02.151 [INFO][6551] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" iface="eth0" netns=""
Sep 13 01:37:02.181968 env[1589]: 2025-09-13 01:37:02.151 [INFO][6551] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864"
Sep 13 01:37:02.181968 env[1589]: 2025-09-13 01:37:02.151 [INFO][6551] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864"
Sep 13 01:37:02.181968 env[1589]: 2025-09-13 01:37:02.169 [INFO][6558] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" HandleID="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0"
Sep 13 01:37:02.181968 env[1589]: 2025-09-13 01:37:02.170 [INFO][6558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 01:37:02.181968 env[1589]: 2025-09-13 01:37:02.170 [INFO][6558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 01:37:02.181968 env[1589]: 2025-09-13 01:37:02.178 [WARNING][6558] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" HandleID="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0"
Sep 13 01:37:02.181968 env[1589]: 2025-09-13 01:37:02.178 [INFO][6558] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" HandleID="k8s-pod-network.416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864" Workload="ci--3510.3.8--n--9d226ffbbf-k8s-calico--apiserver--948d44db6--dx4zf-eth0"
Sep 13 01:37:02.181968 env[1589]: 2025-09-13 01:37:02.179 [INFO][6558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 01:37:02.181968 env[1589]: 2025-09-13 01:37:02.180 [INFO][6551] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864"
Sep 13 01:37:02.182512 env[1589]: time="2025-09-13T01:37:02.182471238Z" level=info msg="TearDown network for sandbox \"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864\" successfully"
Sep 13 01:37:02.192132 env[1589]: time="2025-09-13T01:37:02.192094965Z" level=info msg="RemovePodSandbox \"416754dfde188fcfc623c001b5b831c491b0077a20866798775e149518f1d864\" returns successfully"
Sep 13 01:37:17.241478 systemd[1]: run-containerd-runc-k8s.io-b9419d5cd915862421d9c0e7b507fa6c072e0ade50153f249368ac4ec61c1e1b-runc.zq0Fve.mount: Deactivated successfully.
Sep 13 01:37:29.776522 systemd[1]: run-containerd-runc-k8s.io-a4d9362a7a6aefc4e3f22bb4dd76c596e6f669706ea1901731bd770b6adbc5f1-runc.wPEhxa.mount: Deactivated successfully.
Sep 13 01:37:30.768310 systemd[1]: run-containerd-runc-k8s.io-cb27a2332b86ed9ee671eb70e17d874113a064251a6e5f2064065e67a9da954e-runc.ODxJwF.mount: Deactivated successfully.
Sep 13 01:37:46.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.24:22-10.200.16.10:52548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:37:46.936332 systemd[1]: Started sshd@7-10.200.20.24:22-10.200.16.10:52548.service.
Sep 13 01:37:46.941540 kernel: kauditd_printk_skb: 41 callbacks suppressed
Sep 13 01:37:46.941657 kernel: audit: type=1130 audit(1757727466.935:469): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.24:22-10.200.16.10:52548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:37:47.244517 systemd[1]: run-containerd-runc-k8s.io-b9419d5cd915862421d9c0e7b507fa6c072e0ade50153f249368ac4ec61c1e1b-runc.HGp6jQ.mount: Deactivated successfully.
Sep 13 01:37:47.362000 audit[6680]: USER_ACCT pid=6680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:47.363562 sshd[6680]: Accepted publickey for core from 10.200.16.10 port 52548 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:37:47.365471 sshd[6680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:37:47.363000 audit[6680]: CRED_ACQ pid=6680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:47.413175 kernel: audit: type=1101 audit(1757727467.362:470): pid=6680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:47.413317 kernel: audit: type=1103 audit(1757727467.363:471): pid=6680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:47.428154 kernel: audit: type=1006 audit(1757727467.363:472): pid=6680 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1
Sep 13 01:37:47.363000 audit[6680]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcb0b3fc0 a2=3 a3=1 items=0 ppid=1 pid=6680 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:37:47.451793 kernel: audit: type=1300 audit(1757727467.363:472): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcb0b3fc0 a2=3 a3=1 items=0 ppid=1 pid=6680 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:37:47.363000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:37:47.454486 systemd[1]: Started session-10.scope.
Sep 13 01:37:47.459878 kernel: audit: type=1327 audit(1757727467.363:472): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:37:47.459862 systemd-logind[1569]: New session 10 of user core.
Sep 13 01:37:47.463000 audit[6680]: USER_START pid=6680 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:47.465000 audit[6706]: CRED_ACQ pid=6706 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:47.511203 kernel: audit: type=1105 audit(1757727467.463:473): pid=6680 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:47.511333 kernel: audit: type=1103 audit(1757727467.465:474): pid=6706 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:47.891067 sshd[6680]: pam_unix(sshd:session): session closed for user core
Sep 13 01:37:47.890000 audit[6680]: USER_END pid=6680 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:47.894994 systemd[1]: sshd@7-10.200.20.24:22-10.200.16.10:52548.service: Deactivated successfully.
Sep 13 01:37:47.895857 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 01:37:47.891000 audit[6680]: CRED_DISP pid=6680 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:47.940815 kernel: audit: type=1106 audit(1757727467.890:475): pid=6680 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:47.940979 kernel: audit: type=1104 audit(1757727467.891:476): pid=6680 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:47.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.24:22-10.200.16.10:52548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:37:47.941617 systemd-logind[1569]: Session 10 logged out. Waiting for processes to exit.
Sep 13 01:37:47.942528 systemd-logind[1569]: Removed session 10.
Sep 13 01:37:52.958659 systemd[1]: Started sshd@8-10.200.20.24:22-10.200.16.10:47362.service.
Sep 13 01:37:52.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.24:22-10.200.16.10:47362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:37:52.966671 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 01:37:52.966806 kernel: audit: type=1130 audit(1757727472.957:478): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.24:22-10.200.16.10:47362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:37:53.402000 audit[6716]: USER_ACCT pid=6716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:53.404335 sshd[6716]: Accepted publickey for core from 10.200.16.10 port 47362 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:37:53.426000 audit[6716]: CRED_ACQ pid=6716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:53.428221 sshd[6716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:37:53.448821 kernel: audit: type=1101 audit(1757727473.402:479): pid=6716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:53.448999 kernel: audit: type=1103 audit(1757727473.426:480): pid=6716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:53.462390 kernel: audit: type=1006 audit(1757727473.426:481): pid=6716 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1
Sep 13 01:37:53.426000 audit[6716]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4c2dd60 a2=3 a3=1 items=0 ppid=1 pid=6716 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:37:53.467421 systemd-logind[1569]: New session 11 of user core.
Sep 13 01:37:53.468278 systemd[1]: Started session-11.scope.
Sep 13 01:37:53.485962 kernel: audit: type=1300 audit(1757727473.426:481): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4c2dd60 a2=3 a3=1 items=0 ppid=1 pid=6716 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:37:53.426000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:37:53.496793 kernel: audit: type=1327 audit(1757727473.426:481): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:37:53.485000 audit[6716]: USER_START pid=6716 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:53.527645 kernel: audit: type=1105 audit(1757727473.485:482): pid=6716 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:53.487000 audit[6719]: CRED_ACQ pid=6719 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:53.572640 kernel: audit: type=1103 audit(1757727473.487:483): pid=6719 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:53.840129 sshd[6716]: pam_unix(sshd:session): session closed for user core
Sep 13 01:37:53.840000 audit[6716]: USER_END pid=6716 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:53.843292 systemd-logind[1569]: Session 11 logged out. Waiting for processes to exit.
Sep 13 01:37:53.844107 systemd[1]: sshd@8-10.200.20.24:22-10.200.16.10:47362.service: Deactivated successfully.
Sep 13 01:37:53.844963 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 01:37:53.846465 systemd-logind[1569]: Removed session 11.
Sep 13 01:37:53.840000 audit[6716]: CRED_DISP pid=6716 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:53.888612 kernel: audit: type=1106 audit(1757727473.840:484): pid=6716 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:53.888749 kernel: audit: type=1104 audit(1757727473.840:485): pid=6716 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:53.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.24:22-10.200.16.10:47362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:37:55.958735 systemd[1]: run-containerd-runc-k8s.io-cb27a2332b86ed9ee671eb70e17d874113a064251a6e5f2064065e67a9da954e-runc.kKR0DE.mount: Deactivated successfully.
Sep 13 01:37:58.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.24:22-10.200.16.10:47370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:37:58.907382 systemd[1]: Started sshd@9-10.200.20.24:22-10.200.16.10:47370.service.
Sep 13 01:37:58.912405 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 01:37:58.912472 kernel: audit: type=1130 audit(1757727478.906:487): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.24:22-10.200.16.10:47370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:37:59.322000 audit[6749]: USER_ACCT pid=6749 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:59.323796 sshd[6749]: Accepted publickey for core from 10.200.16.10 port 47370 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:37:59.347275 kernel: audit: type=1101 audit(1757727479.322:488): pid=6749 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:59.346000 audit[6749]: CRED_ACQ pid=6749 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:59.348601 sshd[6749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:37:59.384557 kernel: audit: type=1103 audit(1757727479.346:489): pid=6749 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:59.384693 kernel: audit: type=1006 audit(1757727479.347:490): pid=6749 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1
Sep 13 01:37:59.347000 audit[6749]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc37af2e0 a2=3 a3=1 items=0 ppid=1 pid=6749 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:37:59.409604 kernel: audit: type=1300 audit(1757727479.347:490): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc37af2e0 a2=3 a3=1 items=0 ppid=1 pid=6749 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:37:59.347000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:37:59.413331 systemd[1]: Started session-12.scope.
Sep 13 01:37:59.414380 systemd-logind[1569]: New session 12 of user core.
Sep 13 01:37:59.420114 kernel: audit: type=1327 audit(1757727479.347:490): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:37:59.420195 kernel: audit: type=1105 audit(1757727479.417:491): pid=6749 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:59.417000 audit[6749]: USER_START pid=6749 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:59.419000 audit[6752]: CRED_ACQ pid=6752 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:59.465998 kernel: audit: type=1103 audit(1757727479.419:492): pid=6752 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:59.746408 sshd[6749]: pam_unix(sshd:session): session closed for user core
Sep 13 01:37:59.746000 audit[6749]: USER_END pid=6749 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:59.751300 systemd-logind[1569]: Session 12 logged out. Waiting for processes to exit.
Sep 13 01:37:59.752682 systemd[1]: sshd@9-10.200.20.24:22-10.200.16.10:47370.service: Deactivated successfully.
Sep 13 01:37:59.753545 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 01:37:59.755073 systemd-logind[1569]: Removed session 12.
Sep 13 01:37:59.748000 audit[6749]: CRED_DISP pid=6749 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:59.795198 kernel: audit: type=1106 audit(1757727479.746:493): pid=6749 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:59.795342 kernel: audit: type=1104 audit(1757727479.748:494): pid=6749 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:37:59.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.24:22-10.200.16.10:47370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:00.659035 systemd[1]: run-containerd-runc-k8s.io-cb27a2332b86ed9ee671eb70e17d874113a064251a6e5f2064065e67a9da954e-runc.MFeCf4.mount: Deactivated successfully.
Sep 13 01:38:01.649251 systemd[1]: run-containerd-runc-k8s.io-a4d9362a7a6aefc4e3f22bb4dd76c596e6f669706ea1901731bd770b6adbc5f1-runc.IOFNZW.mount: Deactivated successfully.
Sep 13 01:38:04.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.24:22-10.200.16.10:35032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:04.816592 systemd[1]: Started sshd@10-10.200.20.24:22-10.200.16.10:35032.service.
Sep 13 01:38:04.823199 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 01:38:04.823374 kernel: audit: type=1130 audit(1757727484.815:496): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.24:22-10.200.16.10:35032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:05.237000 audit[6800]: USER_ACCT pid=6800 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:05.238459 sshd[6800]: Accepted publickey for core from 10.200.16.10 port 35032 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:05.262212 kernel: audit: type=1101 audit(1757727485.237:497): pid=6800 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:05.263069 sshd[6800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:05.261000 audit[6800]: CRED_ACQ pid=6800 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:05.268666 systemd[1]: Started session-13.scope.
Sep 13 01:38:05.269672 systemd-logind[1569]: New session 13 of user core.
Sep 13 01:38:05.300279 kernel: audit: type=1103 audit(1757727485.261:498): pid=6800 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:05.300405 kernel: audit: type=1006 audit(1757727485.261:499): pid=6800 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1
Sep 13 01:38:05.300448 kernel: audit: type=1300 audit(1757727485.261:499): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd72a9750 a2=3 a3=1 items=0 ppid=1 pid=6800 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:05.261000 audit[6800]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd72a9750 a2=3 a3=1 items=0 ppid=1 pid=6800 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:05.261000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:05.333414 kernel: audit: type=1327 audit(1757727485.261:499): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:05.274000 audit[6800]: USER_START pid=6800 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:05.359516 kernel: audit: type=1105 audit(1757727485.274:500): pid=6800 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:05.278000 audit[6803]: CRED_ACQ pid=6803 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:05.381150 kernel: audit: type=1103 audit(1757727485.278:501): pid=6803 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:05.641256 sshd[6800]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:05.640000 audit[6800]: USER_END pid=6800 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:05.668020 systemd[1]: sshd@10-10.200.20.24:22-10.200.16.10:35032.service: Deactivated successfully.
Sep 13 01:38:05.669208 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 01:38:05.641000 audit[6800]: CRED_DISP pid=6800 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:05.669219 systemd-logind[1569]: Session 13 logged out. Waiting for processes to exit.
Sep 13 01:38:05.690512 kernel: audit: type=1106 audit(1757727485.640:502): pid=6800 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:05.690633 kernel: audit: type=1104 audit(1757727485.641:503): pid=6800 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:05.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.24:22-10.200.16.10:35032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:05.691445 systemd-logind[1569]: Removed session 13.
Sep 13 01:38:05.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.24:22-10.200.16.10:35048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:05.707731 systemd[1]: Started sshd@11-10.200.20.24:22-10.200.16.10:35048.service.
Sep 13 01:38:06.119000 audit[6816]: USER_ACCT pid=6816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:06.121275 sshd[6816]: Accepted publickey for core from 10.200.16.10 port 35048 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:06.121000 audit[6816]: CRED_ACQ pid=6816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:06.121000 audit[6816]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee92d380 a2=3 a3=1 items=0 ppid=1 pid=6816 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:06.121000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:06.122930 sshd[6816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:06.127576 systemd[1]: Started session-14.scope.
Sep 13 01:38:06.127789 systemd-logind[1569]: New session 14 of user core.
Sep 13 01:38:06.131000 audit[6816]: USER_START pid=6816 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:06.132000 audit[6819]: CRED_ACQ pid=6819 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:06.518822 sshd[6816]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:06.518000 audit[6816]: USER_END pid=6816 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:06.519000 audit[6816]: CRED_DISP pid=6816 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:06.522107 systemd[1]: sshd@11-10.200.20.24:22-10.200.16.10:35048.service: Deactivated successfully.
Sep 13 01:38:06.522986 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 01:38:06.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.24:22-10.200.16.10:35048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:06.523343 systemd-logind[1569]: Session 14 logged out. Waiting for processes to exit.
Sep 13 01:38:06.524109 systemd-logind[1569]: Removed session 14.
Sep 13 01:38:06.586792 systemd[1]: Started sshd@12-10.200.20.24:22-10.200.16.10:35050.service.
Sep 13 01:38:06.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.24:22-10.200.16.10:35050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:07.005000 audit[6826]: USER_ACCT pid=6826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:07.007005 sshd[6826]: Accepted publickey for core from 10.200.16.10 port 35050 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:07.006000 audit[6826]: CRED_ACQ pid=6826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:07.006000 audit[6826]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdff45320 a2=3 a3=1 items=0 ppid=1 pid=6826 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:07.006000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:07.008641 sshd[6826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:07.013316 systemd[1]: Started session-15.scope.
Sep 13 01:38:07.013531 systemd-logind[1569]: New session 15 of user core.
Sep 13 01:38:07.016000 audit[6826]: USER_START pid=6826 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:07.017000 audit[6829]: CRED_ACQ pid=6829 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:07.493527 sshd[6826]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:07.493000 audit[6826]: USER_END pid=6826 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:07.493000 audit[6826]: CRED_DISP pid=6826 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:07.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.24:22-10.200.16.10:35050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:07.496279 systemd[1]: sshd@12-10.200.20.24:22-10.200.16.10:35050.service: Deactivated successfully.
Sep 13 01:38:07.497950 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 01:38:07.498513 systemd-logind[1569]: Session 15 logged out. Waiting for processes to exit.
Sep 13 01:38:07.499871 systemd-logind[1569]: Removed session 15.
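(Aside, not part of the captured log: the `proctitle=` values in the audit records above are hex-encoded process titles. Assuming a Python environment, the recurring value can be decoded with a couple of lines; the `\x00` replacement handles the null bytes that the audit subsystem uses to separate argv entries, though this particular title contains none.)

```python
# Decode the hex-encoded PROCTITLE value seen in the audit records above.
# Audit encodes the process title as hex; null bytes separate argv entries.
encoded = "737368643A20636F7265205B707269765D"
decoded = bytes.fromhex(encoded).replace(b"\x00", b" ").decode()
print(decoded)  # sshd: core [priv]
```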
Sep 13 01:38:12.586297 kernel: kauditd_printk_skb: 23 callbacks suppressed
Sep 13 01:38:12.586431 kernel: audit: type=1130 audit(1757727492.560:523): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.24:22-10.200.16.10:54718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:12.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.24:22-10.200.16.10:54718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:12.562010 systemd[1]: Started sshd@13-10.200.20.24:22-10.200.16.10:54718.service.
Sep 13 01:38:12.975000 audit[6843]: USER_ACCT pid=6843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:12.976976 sshd[6843]: Accepted publickey for core from 10.200.16.10 port 54718 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:12.978797 sshd[6843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:12.977000 audit[6843]: CRED_ACQ pid=6843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:13.022541 kernel: audit: type=1101 audit(1757727492.975:524): pid=6843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:13.022649 kernel: audit: type=1103 audit(1757727492.977:525): pid=6843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:13.036384 kernel: audit: type=1006 audit(1757727492.977:526): pid=6843 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
Sep 13 01:38:12.977000 audit[6843]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd0a8c0d0 a2=3 a3=1 items=0 ppid=1 pid=6843 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:13.060041 kernel: audit: type=1300 audit(1757727492.977:526): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd0a8c0d0 a2=3 a3=1 items=0 ppid=1 pid=6843 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:12.977000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:13.068653 kernel: audit: type=1327 audit(1757727492.977:526): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:13.075065 systemd[1]: Started session-16.scope.
Sep 13 01:38:13.076299 systemd-logind[1569]: New session 16 of user core.
Sep 13 01:38:13.080000 audit[6843]: USER_START pid=6843 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:13.082000 audit[6846]: CRED_ACQ pid=6846 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:13.129146 kernel: audit: type=1105 audit(1757727493.080:527): pid=6843 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:13.129272 kernel: audit: type=1103 audit(1757727493.082:528): pid=6846 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:13.413535 sshd[6843]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:13.413000 audit[6843]: USER_END pid=6843 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:13.442485 kernel: audit: type=1106 audit(1757727493.413:529): pid=6843 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:13.440662 systemd[1]: sshd@13-10.200.20.24:22-10.200.16.10:54718.service: Deactivated successfully.
Sep 13 01:38:13.441484 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 01:38:13.419000 audit[6843]: CRED_DISP pid=6843 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:13.464030 systemd-logind[1569]: Session 16 logged out. Waiting for processes to exit.
Sep 13 01:38:13.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.24:22-10.200.16.10:54718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:13.465022 systemd-logind[1569]: Removed session 16.
Sep 13 01:38:13.465634 kernel: audit: type=1104 audit(1757727493.419:530): pid=6843 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:18.481521 systemd[1]: Started sshd@14-10.200.20.24:22-10.200.16.10:54728.service.
Sep 13 01:38:18.508876 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 01:38:18.508975 kernel: audit: type=1130 audit(1757727498.480:532): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.24:22-10.200.16.10:54728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:18.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.24:22-10.200.16.10:54728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:18.894000 audit[6876]: USER_ACCT pid=6876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:18.895584 sshd[6876]: Accepted publickey for core from 10.200.16.10 port 54728 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:18.897476 sshd[6876]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:18.895000 audit[6876]: CRED_ACQ pid=6876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:18.922857 systemd[1]: Started session-17.scope.
Sep 13 01:38:18.923962 systemd-logind[1569]: New session 17 of user core.
Sep 13 01:38:18.939404 kernel: audit: type=1101 audit(1757727498.894:533): pid=6876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:18.939511 kernel: audit: type=1103 audit(1757727498.895:534): pid=6876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:18.952977 kernel: audit: type=1006 audit(1757727498.895:535): pid=6876 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1
Sep 13 01:38:18.895000 audit[6876]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc873be50 a2=3 a3=1 items=0 ppid=1 pid=6876 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:18.975903 kernel: audit: type=1300 audit(1757727498.895:535): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc873be50 a2=3 a3=1 items=0 ppid=1 pid=6876 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:18.976742 kernel: audit: type=1327 audit(1757727498.895:535): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:18.895000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:18.952000 audit[6876]: USER_START pid=6876 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:19.008669 kernel: audit: type=1105 audit(1757727498.952:536): pid=6876 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:18.953000 audit[6879]: CRED_ACQ pid=6879 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:19.028859 kernel: audit: type=1103 audit(1757727498.953:537): pid=6879 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:19.286393 sshd[6876]:
pam_unix(sshd:session): session closed for user core Sep 13 01:38:19.286000 audit[6876]: USER_END pid=6876 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:38:19.289609 systemd-logind[1569]: Session 17 logged out. Waiting for processes to exit. Sep 13 01:38:19.290997 systemd[1]: sshd@14-10.200.20.24:22-10.200.16.10:54728.service: Deactivated successfully. Sep 13 01:38:19.291890 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 01:38:19.293652 systemd-logind[1569]: Removed session 17. Sep 13 01:38:19.286000 audit[6876]: CRED_DISP pid=6876 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:38:19.336833 kernel: audit: type=1106 audit(1757727499.286:538): pid=6876 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:38:19.336963 kernel: audit: type=1104 audit(1757727499.286:539): pid=6876 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:38:19.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.24:22-10.200.16.10:54728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:38:19.353598 systemd[1]: Started sshd@15-10.200.20.24:22-10.200.16.10:54738.service. Sep 13 01:38:19.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.24:22-10.200.16.10:54738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:38:19.765000 audit[6889]: USER_ACCT pid=6889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:38:19.766853 sshd[6889]: Accepted publickey for core from 10.200.16.10 port 54738 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:38:19.766000 audit[6889]: CRED_ACQ pid=6889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Sep 13 01:38:19.767000 audit[6889]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc3f06460 a2=3 a3=1 items=0 ppid=1 pid=6889 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:38:19.767000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 01:38:19.768523 sshd[6889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:38:19.773011 systemd[1]: Started session-18.scope. Sep 13 01:38:19.774248 systemd-logind[1569]: New session 18 of user core. 
Sep 13 01:38:19.777000 audit[6889]: USER_START pid=6889 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:19.779000 audit[6892]: CRED_ACQ pid=6892 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:20.302267 sshd[6889]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:20.302000 audit[6889]: USER_END pid=6889 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:20.302000 audit[6889]: CRED_DISP pid=6889 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:20.305112 systemd-logind[1569]: Session 18 logged out. Waiting for processes to exit.
Sep 13 01:38:20.305270 systemd[1]: sshd@15-10.200.20.24:22-10.200.16.10:54738.service: Deactivated successfully.
Sep 13 01:38:20.306147 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 01:38:20.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.24:22-10.200.16.10:54738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:20.309592 systemd-logind[1569]: Removed session 18.
Sep 13 01:38:20.369987 systemd[1]: Started sshd@16-10.200.20.24:22-10.200.16.10:50240.service.
Sep 13 01:38:20.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.24:22-10.200.16.10:50240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:20.787000 audit[6900]: USER_ACCT pid=6900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:20.789346 sshd[6900]: Accepted publickey for core from 10.200.16.10 port 50240 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:20.789000 audit[6900]: CRED_ACQ pid=6900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:20.789000 audit[6900]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc9992d80 a2=3 a3=1 items=0 ppid=1 pid=6900 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:20.789000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:20.791006 sshd[6900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:20.795680 systemd-logind[1569]: New session 19 of user core.
Sep 13 01:38:20.796005 systemd[1]: Started session-19.scope.
Sep 13 01:38:20.800000 audit[6900]: USER_START pid=6900 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:20.802000 audit[6903]: CRED_ACQ pid=6903 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:22.648000 audit[6913]: NETFILTER_CFG table=filter:161 family=2 entries=20 op=nft_register_rule pid=6913 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:38:22.648000 audit[6913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffd259f560 a2=0 a3=1 items=0 ppid=2833 pid=6913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:22.648000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:38:22.657000 audit[6913]: NETFILTER_CFG table=nat:162 family=2 entries=26 op=nft_register_rule pid=6913 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:38:22.657000 audit[6913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffd259f560 a2=0 a3=1 items=0 ppid=2833 pid=6913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:22.657000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:38:22.679000 audit[6915]: NETFILTER_CFG table=filter:163 family=2 entries=32 op=nft_register_rule pid=6915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:38:22.679000 audit[6915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=fffff7d72190 a2=0 a3=1 items=0 ppid=2833 pid=6915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:22.679000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:38:22.700000 audit[6915]: NETFILTER_CFG table=nat:164 family=2 entries=26 op=nft_register_rule pid=6915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:38:22.700000 audit[6915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=fffff7d72190 a2=0 a3=1 items=0 ppid=2833 pid=6915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:22.700000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:38:22.729900 sshd[6900]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:22.729000 audit[6900]: USER_END pid=6900 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:22.730000 audit[6900]: CRED_DISP pid=6900 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:22.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.24:22-10.200.16.10:50240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:22.732638 systemd[1]: sshd@16-10.200.20.24:22-10.200.16.10:50240.service: Deactivated successfully.
Sep 13 01:38:22.734300 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 01:38:22.734739 systemd-logind[1569]: Session 19 logged out. Waiting for processes to exit.
Sep 13 01:38:22.735917 systemd-logind[1569]: Removed session 19.
Sep 13 01:38:22.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.24:22-10.200.16.10:50256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:22.795904 systemd[1]: Started sshd@17-10.200.20.24:22-10.200.16.10:50256.service.
Sep 13 01:38:23.221000 audit[6918]: USER_ACCT pid=6918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:23.223326 sshd[6918]: Accepted publickey for core from 10.200.16.10 port 50256 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:23.223000 audit[6918]: CRED_ACQ pid=6918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:23.223000 audit[6918]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd2466e90 a2=3 a3=1 items=0 ppid=1 pid=6918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:23.223000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:23.225001 sshd[6918]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:23.229541 systemd[1]: Started session-20.scope.
Sep 13 01:38:23.229851 systemd-logind[1569]: New session 20 of user core.
Sep 13 01:38:23.233000 audit[6918]: USER_START pid=6918 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:23.235000 audit[6927]: CRED_ACQ pid=6927 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:23.815585 sshd[6918]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:23.846220 kernel: kauditd_printk_skb: 43 callbacks suppressed
Sep 13 01:38:23.846356 kernel: audit: type=1106 audit(1757727503.816:569): pid=6918 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:23.816000 audit[6918]: USER_END pid=6918 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:23.844571 systemd[1]: sshd@17-10.200.20.24:22-10.200.16.10:50256.service: Deactivated successfully.
Sep 13 01:38:23.845447 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 01:38:23.847337 systemd-logind[1569]: Session 20 logged out. Waiting for processes to exit.
Sep 13 01:38:23.848392 systemd-logind[1569]: Removed session 20.
Sep 13 01:38:23.816000 audit[6918]: CRED_DISP pid=6918 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:23.871826 kernel: audit: type=1104 audit(1757727503.816:570): pid=6918 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:23.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.24:22-10.200.16.10:50256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:23.898960 kernel: audit: type=1131 audit(1757727503.843:571): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.24:22-10.200.16.10:50256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:23.897922 systemd[1]: Started sshd@18-10.200.20.24:22-10.200.16.10:50260.service.
Sep 13 01:38:23.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.24:22-10.200.16.10:50260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:23.926269 kernel: audit: type=1130 audit(1757727503.896:572): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.24:22-10.200.16.10:50260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:24.319808 sshd[6935]: Accepted publickey for core from 10.200.16.10 port 50260 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:24.370691 kernel: audit: type=1101 audit(1757727504.318:573): pid=6935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:24.370757 kernel: audit: type=1103 audit(1757727504.343:574): pid=6935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:24.318000 audit[6935]: USER_ACCT pid=6935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:24.343000 audit[6935]: CRED_ACQ pid=6935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:24.345016 sshd[6935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:24.372260 systemd[1]: Started session-21.scope.
Sep 13 01:38:24.373535 systemd-logind[1569]: New session 21 of user core.
Sep 13 01:38:24.389698 kernel: audit: type=1006 audit(1757727504.343:575): pid=6935 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
Sep 13 01:38:24.343000 audit[6935]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeb156c90 a2=3 a3=1 items=0 ppid=1 pid=6935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:24.418926 kernel: audit: type=1300 audit(1757727504.343:575): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeb156c90 a2=3 a3=1 items=0 ppid=1 pid=6935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:24.343000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:24.430826 kernel: audit: type=1327 audit(1757727504.343:575): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:24.430921 kernel: audit: type=1105 audit(1757727504.390:576): pid=6935 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:24.390000 audit[6935]: USER_START pid=6935 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:24.391000 audit[6940]: CRED_ACQ pid=6940 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:24.751590 sshd[6935]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:24.751000 audit[6935]: USER_END pid=6935 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:24.751000 audit[6935]: CRED_DISP pid=6935 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:24.754539 systemd-logind[1569]: Session 21 logged out. Waiting for processes to exit.
Sep 13 01:38:24.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.24:22-10.200.16.10:50260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:24.755773 systemd[1]: sshd@18-10.200.20.24:22-10.200.16.10:50260.service: Deactivated successfully.
Sep 13 01:38:24.756610 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 01:38:24.758008 systemd-logind[1569]: Removed session 21.
Sep 13 01:38:28.758000 audit[6951]: NETFILTER_CFG table=filter:165 family=2 entries=20 op=nft_register_rule pid=6951 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:38:28.758000 audit[6951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffd818e1a0 a2=0 a3=1 items=0 ppid=2833 pid=6951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:28.758000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:38:28.763000 audit[6951]: NETFILTER_CFG table=nat:166 family=2 entries=110 op=nft_register_chain pid=6951 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:38:28.763000 audit[6951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffd818e1a0 a2=0 a3=1 items=0 ppid=2833 pid=6951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:28.763000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:38:29.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.24:22-10.200.16.10:55970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:29.818807 systemd[1]: Started sshd@19-10.200.20.24:22-10.200.16.10:55970.service.
Sep 13 01:38:29.823929 kernel: kauditd_printk_skb: 10 callbacks suppressed
Sep 13 01:38:29.824049 kernel: audit: type=1130 audit(1757727509.817:583): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.24:22-10.200.16.10:55970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:30.236000 audit[6972]: USER_ACCT pid=6972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:30.237869 sshd[6972]: Accepted publickey for core from 10.200.16.10 port 55970 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:30.260205 kernel: audit: type=1101 audit(1757727510.236:584): pid=6972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:30.260000 audit[6972]: CRED_ACQ pid=6972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:30.262037 sshd[6972]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:30.296901 kernel: audit: type=1103 audit(1757727510.260:585): pid=6972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:30.297094 kernel: audit: type=1006 audit(1757727510.260:586): pid=6972 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Sep 13 01:38:30.260000 audit[6972]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd9ca760 a2=3 a3=1 items=0 ppid=1 pid=6972 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:30.321200 kernel: audit: type=1300 audit(1757727510.260:586): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd9ca760 a2=3 a3=1 items=0 ppid=1 pid=6972 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:30.260000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:30.329957 kernel: audit: type=1327 audit(1757727510.260:586): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:30.333592 systemd[1]: Started session-22.scope.
Sep 13 01:38:30.333951 systemd-logind[1569]: New session 22 of user core.
Sep 13 01:38:30.338000 audit[6972]: USER_START pid=6972 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:30.340000 audit[6978]: CRED_ACQ pid=6978 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:30.388998 kernel: audit: type=1105 audit(1757727510.338:587): pid=6972 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:30.389233 kernel: audit: type=1103 audit(1757727510.340:588): pid=6978 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:30.658370 sshd[6972]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:30.662000 audit[6972]: USER_END pid=6972 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:30.665857 systemd[1]: run-containerd-runc-k8s.io-cb27a2332b86ed9ee671eb70e17d874113a064251a6e5f2064065e67a9da954e-runc.j5lqUB.mount: Deactivated successfully.
Sep 13 01:38:30.669156 systemd[1]: sshd@19-10.200.20.24:22-10.200.16.10:55970.service: Deactivated successfully.
Sep 13 01:38:30.672131 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 01:38:30.691621 systemd-logind[1569]: Session 22 logged out. Waiting for processes to exit.
Sep 13 01:38:30.662000 audit[6972]: CRED_DISP pid=6972 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:30.714461 kernel: audit: type=1106 audit(1757727510.662:589): pid=6972 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:30.714601 kernel: audit: type=1104 audit(1757727510.662:590): pid=6972 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:30.715247 systemd-logind[1569]: Removed session 22.
Sep 13 01:38:30.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.24:22-10.200.16.10:55970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:35.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.24:22-10.200.16.10:55978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:35.721520 systemd[1]: Started sshd@20-10.200.20.24:22-10.200.16.10:55978.service.
Sep 13 01:38:35.726793 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 01:38:35.726897 kernel: audit: type=1130 audit(1757727515.720:592): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.24:22-10.200.16.10:55978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:36.134000 audit[7030]: USER_ACCT pid=7030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:36.136380 sshd[7030]: Accepted publickey for core from 10.200.16.10 port 55978 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:36.161850 kernel: audit: type=1101 audit(1757727516.134:593): pid=7030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:36.160000 audit[7030]: CRED_ACQ pid=7030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:36.162534 sshd[7030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:36.201639 kernel: audit: type=1103 audit(1757727516.160:594): pid=7030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:36.201759 kernel: audit: type=1006 audit(1757727516.161:595): pid=7030 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Sep 13 01:38:36.161000 audit[7030]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe7026020 a2=3 a3=1 items=0 ppid=1 pid=7030 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:36.226604 kernel: audit: type=1300 audit(1757727516.161:595): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe7026020 a2=3 a3=1 items=0 ppid=1 pid=7030 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:36.228209 kernel: audit: type=1327 audit(1757727516.161:595): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:36.161000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:36.236855 systemd-logind[1569]: New session 23 of user core.
Sep 13 01:38:36.237304 systemd[1]: Started session-23.scope.
Sep 13 01:38:36.241000 audit[7030]: USER_START pid=7030 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:36.269000 audit[7033]: CRED_ACQ pid=7033 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:36.292392 kernel: audit: type=1105 audit(1757727516.241:596): pid=7030 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:36.292523 kernel: audit: type=1103 audit(1757727516.269:597): pid=7033 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:36.629612 sshd[7030]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:36.629000 audit[7030]: USER_END pid=7030 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:36.638927 systemd-logind[1569]: Session 23 logged out. Waiting for processes to exit.
Sep 13 01:38:36.639705 systemd[1]: sshd@20-10.200.20.24:22-10.200.16.10:55978.service: Deactivated successfully.
Sep 13 01:38:36.640587 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 01:38:36.641670 systemd-logind[1569]: Removed session 23.
Sep 13 01:38:36.629000 audit[7030]: CRED_DISP pid=7030 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:36.687449 kernel: audit: type=1106 audit(1757727516.629:598): pid=7030 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:36.687589 kernel: audit: type=1104 audit(1757727516.629:599): pid=7030 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:36.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.24:22-10.200.16.10:55978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:41.698367 systemd[1]: Started sshd@21-10.200.20.24:22-10.200.16.10:38702.service.
Sep 13 01:38:41.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.24:22-10.200.16.10:38702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:41.705198 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 01:38:41.705279 kernel: audit: type=1130 audit(1757727521.698:601): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.24:22-10.200.16.10:38702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:42.114000 audit[7043]: USER_ACCT pid=7043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:42.116085 sshd[7043]: Accepted publickey for core from 10.200.16.10 port 38702 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:42.118097 sshd[7043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:42.123129 systemd[1]: Started session-24.scope.
Sep 13 01:38:42.124173 systemd-logind[1569]: New session 24 of user core.
Sep 13 01:38:42.116000 audit[7043]: CRED_ACQ pid=7043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:42.161054 kernel: audit: type=1101 audit(1757727522.114:602): pid=7043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:42.161158 kernel: audit: type=1103 audit(1757727522.116:603): pid=7043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:42.175569 kernel: audit: type=1006 audit(1757727522.116:604): pid=7043 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Sep 13 01:38:42.116000 audit[7043]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdffdb080 a2=3 a3=1 items=0 ppid=1 pid=7043 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:42.198166 kernel: audit: type=1300 audit(1757727522.116:604): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdffdb080 a2=3 a3=1 items=0 ppid=1 pid=7043 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:42.116000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:42.206272 kernel: audit: type=1327 audit(1757727522.116:604): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:42.206381 kernel: audit: type=1105 audit(1757727522.127:605): pid=7043 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:42.127000 audit[7043]: USER_START pid=7043 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:42.129000 audit[7045]: CRED_ACQ pid=7045 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:42.251284 kernel: audit: type=1103 audit(1757727522.129:606): pid=7045 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:42.489785 sshd[7043]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:42.489000 audit[7043]: USER_END pid=7043 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:42.517232 systemd[1]: sshd@21-10.200.20.24:22-10.200.16.10:38702.service: Deactivated successfully.
Sep 13 01:38:42.518035 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 01:38:42.489000 audit[7043]: CRED_DISP pid=7043 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:42.539560 kernel: audit: type=1106 audit(1757727522.489:607): pid=7043 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:42.539821 kernel: audit: type=1104 audit(1757727522.489:608): pid=7043 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:42.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.24:22-10.200.16.10:38702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:42.540041 systemd-logind[1569]: Session 24 logged out. Waiting for processes to exit.
Sep 13 01:38:42.541163 systemd-logind[1569]: Removed session 24.
Sep 13 01:38:47.234059 systemd[1]: run-containerd-runc-k8s.io-b9419d5cd915862421d9c0e7b507fa6c072e0ade50153f249368ac4ec61c1e1b-runc.1yGBok.mount: Deactivated successfully.
Sep 13 01:38:47.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.24:22-10.200.16.10:38714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:47.570923 systemd[1]: Started sshd@22-10.200.20.24:22-10.200.16.10:38714.service.
Sep 13 01:38:47.575761 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 01:38:47.575883 kernel: audit: type=1130 audit(1757727527.569:610): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.24:22-10.200.16.10:38714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:48.008000 audit[7091]: USER_ACCT pid=7091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:48.010021 sshd[7091]: Accepted publickey for core from 10.200.16.10 port 38714 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:48.033261 kernel: audit: type=1101 audit(1757727528.008:611): pid=7091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:48.034034 sshd[7091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:48.032000 audit[7091]: CRED_ACQ pid=7091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:48.069554 kernel: audit: type=1103 audit(1757727528.032:612): pid=7091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:48.069764 kernel: audit: type=1006 audit(1757727528.032:613): pid=7091 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Sep 13 01:38:48.032000 audit[7091]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd66c7650 a2=3 a3=1 items=0 ppid=1 pid=7091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:48.092574 kernel: audit: type=1300 audit(1757727528.032:613): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd66c7650 a2=3 a3=1 items=0 ppid=1 pid=7091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:48.032000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:48.102206 kernel: audit: type=1327 audit(1757727528.032:613): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:48.106332 systemd[1]: Started session-25.scope.
Sep 13 01:38:48.107334 systemd-logind[1569]: New session 25 of user core.
Sep 13 01:38:48.111000 audit[7091]: USER_START pid=7091 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:48.113000 audit[7094]: CRED_ACQ pid=7094 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:48.158437 kernel: audit: type=1105 audit(1757727528.111:614): pid=7091 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:48.159026 kernel: audit: type=1103 audit(1757727528.113:615): pid=7094 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:48.419220 sshd[7091]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:48.419000 audit[7091]: USER_END pid=7091 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:48.423193 systemd-logind[1569]: Session 25 logged out. Waiting for processes to exit.
Sep 13 01:38:48.424613 systemd[1]: sshd@22-10.200.20.24:22-10.200.16.10:38714.service: Deactivated successfully.
Sep 13 01:38:48.425485 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 01:38:48.427261 systemd-logind[1569]: Removed session 25.
Sep 13 01:38:48.419000 audit[7091]: CRED_DISP pid=7091 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:48.468893 kernel: audit: type=1106 audit(1757727528.419:616): pid=7091 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:48.469033 kernel: audit: type=1104 audit(1757727528.419:617): pid=7091 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:48.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.24:22-10.200.16.10:38714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:53.488649 systemd[1]: Started sshd@23-10.200.20.24:22-10.200.16.10:35628.service.
Sep 13 01:38:53.514779 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 01:38:53.514896 kernel: audit: type=1130 audit(1757727533.487:619): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.24:22-10.200.16.10:35628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:53.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.24:22-10.200.16.10:35628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:53.899000 audit[7113]: USER_ACCT pid=7113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:53.901373 sshd[7113]: Accepted publickey for core from 10.200.16.10 port 35628 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:53.925459 kernel: audit: type=1101 audit(1757727533.899:620): pid=7113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:53.925911 sshd[7113]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:53.924000 audit[7113]: CRED_ACQ pid=7113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:53.963286 kernel: audit: type=1103 audit(1757727533.924:621): pid=7113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:53.963472 kernel: audit: type=1006 audit(1757727533.924:622): pid=7113 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Sep 13 01:38:53.924000 audit[7113]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9723ef0 a2=3 a3=1 items=0 ppid=1 pid=7113 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:53.987629 kernel: audit: type=1300 audit(1757727533.924:622): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9723ef0 a2=3 a3=1 items=0 ppid=1 pid=7113 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:53.988208 kernel: audit: type=1327 audit(1757727533.924:622): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:53.924000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:53.990925 systemd[1]: Started session-26.scope.
Sep 13 01:38:53.995460 systemd-logind[1569]: New session 26 of user core.
Sep 13 01:38:53.998000 audit[7113]: USER_START pid=7113 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:54.027149 kernel: audit: type=1105 audit(1757727533.998:623): pid=7113 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:54.027268 kernel: audit: type=1103 audit(1757727534.025:624): pid=7116 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:54.025000 audit[7116]: CRED_ACQ pid=7116 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:54.422375 sshd[7113]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:54.423000 audit[7113]: USER_END pid=7113 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:54.426153 systemd-logind[1569]: Session 26 logged out. Waiting for processes to exit.
Sep 13 01:38:54.434760 systemd[1]: sshd@23-10.200.20.24:22-10.200.16.10:35628.service: Deactivated successfully.
Sep 13 01:38:54.435664 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 01:38:54.437158 systemd-logind[1569]: Removed session 26.
Sep 13 01:38:54.423000 audit[7113]: CRED_DISP pid=7113 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:54.471133 kernel: audit: type=1106 audit(1757727534.423:625): pid=7113 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:54.471378 kernel: audit: type=1104 audit(1757727534.423:626): pid=7113 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:54.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.24:22-10.200.16.10:35628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:55.962880 systemd[1]: run-containerd-runc-k8s.io-cb27a2332b86ed9ee671eb70e17d874113a064251a6e5f2064065e67a9da954e-runc.KRi4Yf.mount: Deactivated successfully.
Sep 13 01:38:59.489826 systemd[1]: Started sshd@24-10.200.20.24:22-10.200.16.10:35638.service.
Sep 13 01:38:59.516978 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 01:38:59.517102 kernel: audit: type=1130 audit(1757727539.488:628): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.24:22-10.200.16.10:35638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:59.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.24:22-10.200.16.10:35638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:38:59.904000 audit[7153]: USER_ACCT pid=7153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:59.906303 sshd[7153]: Accepted publickey for core from 10.200.16.10 port 35638 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:59.930213 kernel: audit: type=1101 audit(1757727539.904:629): pid=7153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:59.929000 audit[7153]: CRED_ACQ pid=7153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:59.930911 sshd[7153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:59.967164 kernel: audit: type=1103 audit(1757727539.929:630): pid=7153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:38:59.967265 kernel: audit: type=1006 audit(1757727539.929:631): pid=7153 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Sep 13 01:38:59.929000 audit[7153]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe94fe8d0 a2=3 a3=1 items=0 ppid=1 pid=7153 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:59.991428 kernel: audit: type=1300 audit(1757727539.929:631): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe94fe8d0 a2=3 a3=1 items=0 ppid=1 pid=7153 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:38:59.991558 kernel: audit: type=1327 audit(1757727539.929:631): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:38:59.929000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:39:00.003087 systemd[1]: Started session-27.scope.
Sep 13 01:39:00.003693 systemd-logind[1569]: New session 27 of user core.
Sep 13 01:39:00.009000 audit[7153]: USER_START pid=7153 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:39:00.009000 audit[7156]: CRED_ACQ pid=7156 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:39:00.059227 kernel: audit: type=1105 audit(1757727540.009:632): pid=7153 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:39:00.059407 kernel: audit: type=1103 audit(1757727540.009:633): pid=7156 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:39:00.325604 sshd[7153]: pam_unix(sshd:session): session closed for user core
Sep 13 01:39:00.325000 audit[7153]: USER_END pid=7153 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:39:00.330648 systemd-logind[1569]: Session 27 logged out. Waiting for processes to exit.
Sep 13 01:39:00.332220 systemd[1]: sshd@24-10.200.20.24:22-10.200.16.10:35638.service: Deactivated successfully.
Sep 13 01:39:00.333086 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 01:39:00.334583 systemd-logind[1569]: Removed session 27.
Sep 13 01:39:00.325000 audit[7153]: CRED_DISP pid=7153 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:39:00.373697 kernel: audit: type=1106 audit(1757727540.325:634): pid=7153 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:39:00.373856 kernel: audit: type=1104 audit(1757727540.325:635): pid=7153 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Sep 13 01:39:00.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.24:22-10.200.16.10:35638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:39:00.662745 systemd[1]: run-containerd-runc-k8s.io-cb27a2332b86ed9ee671eb70e17d874113a064251a6e5f2064065e67a9da954e-runc.r2e9ZP.mount: Deactivated successfully.
Sep 13 01:39:01.650411 systemd[1]: run-containerd-runc-k8s.io-a4d9362a7a6aefc4e3f22bb4dd76c596e6f669706ea1901731bd770b6adbc5f1-runc.y2ubQK.mount: Deactivated successfully.