Aug 13 00:00:51.041049 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 13 00:00:51.041072 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue Aug 12 22:50:30 -00 2025
Aug 13 00:00:51.041080 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Aug 13 00:00:51.041088 kernel: printk: bootconsole [pl11] enabled
Aug 13 00:00:51.041093 kernel: efi: EFI v2.70 by EDK II
Aug 13 00:00:51.041099 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3761cf98
Aug 13 00:00:51.041166 kernel: random: crng init done
Aug 13 00:00:51.041172 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:00:51.041178 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Aug 13 00:00:51.041183 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:51.041189 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:51.041194 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Aug 13 00:00:51.041203 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:51.041209 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:51.041215 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:51.041221 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:51.041227 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:51.041234 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:51.041240 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Aug 13 00:00:51.041246 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:51.041252 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Aug 13 00:00:51.041258 kernel: NUMA: Failed to initialise from firmware
Aug 13 00:00:51.041264 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Aug 13 00:00:51.041270 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff]
Aug 13 00:00:51.041276 kernel: Zone ranges:
Aug 13 00:00:51.041282 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Aug 13 00:00:51.041287 kernel:   DMA32    empty
Aug 13 00:00:51.041293 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Aug 13 00:00:51.041300 kernel: Movable zone start for each node
Aug 13 00:00:51.041306 kernel: Early memory node ranges
Aug 13 00:00:51.041312 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
Aug 13 00:00:51.041318 kernel:   node   0: [mem 0x0000000000824000-0x000000003e54ffff]
Aug 13 00:00:51.041323 kernel:   node   0: [mem 0x000000003e550000-0x000000003e87ffff]
Aug 13 00:00:51.041329 kernel:   node   0: [mem 0x000000003e880000-0x000000003fc7ffff]
Aug 13 00:00:51.041335 kernel:   node   0: [mem 0x000000003fc80000-0x000000003fcfffff]
Aug 13 00:00:51.041340 kernel:   node   0: [mem 0x000000003fd00000-0x000000003fffffff]
Aug 13 00:00:51.041346 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
Aug 13 00:00:51.041352 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Aug 13 00:00:51.041358 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Aug 13 00:00:51.041364 kernel: psci: probing for conduit method from ACPI.
Aug 13 00:00:51.041373 kernel: psci: PSCIv1.1 detected in firmware.
Aug 13 00:00:51.041379 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 13 00:00:51.041385 kernel: psci: MIGRATE_INFO_TYPE not supported.
Aug 13 00:00:51.041391 kernel: psci: SMC Calling Convention v1.4
Aug 13 00:00:51.041397 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Aug 13 00:00:51.041405 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Aug 13 00:00:51.041411 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Aug 13 00:00:51.041417 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Aug 13 00:00:51.041424 kernel: pcpu-alloc: [0] 0 [0] 1
Aug 13 00:00:51.041430 kernel: Detected PIPT I-cache on CPU0
Aug 13 00:00:51.041436 kernel: CPU features: detected: GIC system register CPU interface
Aug 13 00:00:51.041443 kernel: CPU features: detected: Hardware dirty bit management
Aug 13 00:00:51.041449 kernel: CPU features: detected: Spectre-BHB
Aug 13 00:00:51.041455 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 13 00:00:51.041461 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 13 00:00:51.041467 kernel: CPU features: detected: ARM erratum 1418040
Aug 13 00:00:51.041474 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Aug 13 00:00:51.041480 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 13 00:00:51.041486 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Aug 13 00:00:51.041492 kernel: Policy zone: Normal
Aug 13 00:00:51.041500 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=32404c0887e5b8a80b0f069916a8040bfd969c7a8f47a2db1168b24bc04220cc
Aug 13 00:00:51.041507 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:00:51.041517 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:00:51.041523 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:00:51.041529 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:00:51.041535 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Aug 13 00:00:51.041542 kernel: Memory: 3986880K/4194160K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 207280K reserved, 0K cma-reserved)
Aug 13 00:00:51.041550 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:00:51.041556 kernel: trace event string verifier disabled
Aug 13 00:00:51.041562 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:00:51.041569 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:00:51.041575 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:00:51.041582 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:00:51.041588 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:00:51.041594 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:00:51.041600 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:00:51.041607 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 13 00:00:51.041613 kernel: GICv3: 960 SPIs implemented
Aug 13 00:00:51.041620 kernel: GICv3: 0 Extended SPIs implemented
Aug 13 00:00:51.041626 kernel: GICv3: Distributor has no Range Selector support
Aug 13 00:00:51.041632 kernel: Root IRQ handler: gic_handle_irq
Aug 13 00:00:51.041638 kernel: GICv3: 16 PPIs implemented
Aug 13 00:00:51.041645 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Aug 13 00:00:51.041651 kernel: ITS: No ITS available, not enabling LPIs
Aug 13 00:00:51.041657 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:00:51.041664 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 13 00:00:51.041670 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 13 00:00:51.041677 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 13 00:00:51.041683 kernel: Console: colour dummy device 80x25
Aug 13 00:00:51.041691 kernel: printk: console [tty1] enabled
Aug 13 00:00:51.041697 kernel: ACPI: Core revision 20210730
Aug 13 00:00:51.041704 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 13 00:00:51.041710 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:00:51.041716 kernel: LSM: Security Framework initializing
Aug 13 00:00:51.041723 kernel: SELinux: Initializing.
Aug 13 00:00:51.041729 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:00:51.041736 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:00:51.041742 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Aug 13 00:00:51.041750 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Aug 13 00:00:51.041756 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:00:51.041763 kernel: Remapping and enabling EFI services.
Aug 13 00:00:51.041769 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:00:51.041776 kernel: Detected PIPT I-cache on CPU1
Aug 13 00:00:51.041782 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Aug 13 00:00:51.041788 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:00:51.041795 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 13 00:00:51.041801 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:00:51.041807 kernel: SMP: Total of 2 processors activated.
Aug 13 00:00:51.041815 kernel: CPU features: detected: 32-bit EL0 Support
Aug 13 00:00:51.041822 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Aug 13 00:00:51.041828 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 13 00:00:51.041835 kernel: CPU features: detected: CRC32 instructions
Aug 13 00:00:51.041841 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 13 00:00:51.041848 kernel: CPU features: detected: LSE atomic instructions
Aug 13 00:00:51.041854 kernel: CPU features: detected: Privileged Access Never
Aug 13 00:00:51.041860 kernel: CPU: All CPU(s) started at EL1
Aug 13 00:00:51.041867 kernel: alternatives: patching kernel code
Aug 13 00:00:51.041875 kernel: devtmpfs: initialized
Aug 13 00:00:51.041885 kernel: KASLR enabled
Aug 13 00:00:51.041907 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:00:51.041916 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:00:51.042007 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:00:51.042014 kernel: SMBIOS 3.1.0 present.
Aug 13 00:00:51.042021 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Aug 13 00:00:51.042028 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:00:51.042035 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 13 00:00:51.042044 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 13 00:00:51.042051 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 13 00:00:51.042058 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:00:51.042064 kernel: audit: type=2000 audit(0.090:1): state=initialized audit_enabled=0 res=1
Aug 13 00:00:51.042071 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:00:51.042078 kernel: cpuidle: using governor menu
Aug 13 00:00:51.042084 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 13 00:00:51.042092 kernel: ASID allocator initialised with 32768 entries
Aug 13 00:00:51.042099 kernel: ACPI: bus type PCI registered
Aug 13 00:00:51.042160 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:00:51.042169 kernel: Serial: AMBA PL011 UART driver
Aug 13 00:00:51.042176 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:00:51.042183 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Aug 13 00:00:51.042190 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:00:51.042197 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Aug 13 00:00:51.042204 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:00:51.042212 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 13 00:00:51.042219 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:00:51.042226 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:00:51.042233 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:00:51.042239 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Aug 13 00:00:51.042292 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Aug 13 00:00:51.042334 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Aug 13 00:00:51.042342 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:00:51.042349 kernel: ACPI: Interpreter enabled
Aug 13 00:00:51.042358 kernel: ACPI: Using GIC for interrupt routing
Aug 13 00:00:51.042365 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Aug 13 00:00:51.042372 kernel: printk: console [ttyAMA0] enabled
Aug 13 00:00:51.042379 kernel: printk: bootconsole [pl11] disabled
Aug 13 00:00:51.042386 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Aug 13 00:00:51.042392 kernel: iommu: Default domain type: Translated
Aug 13 00:00:51.042399 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 13 00:00:51.042406 kernel: vgaarb: loaded
Aug 13 00:00:51.042412 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:00:51.042419 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:00:51.042428 kernel: PTP clock support registered
Aug 13 00:00:51.042435 kernel: Registered efivars operations
Aug 13 00:00:51.042442 kernel: No ACPI PMU IRQ for CPU0
Aug 13 00:00:51.042449 kernel: No ACPI PMU IRQ for CPU1
Aug 13 00:00:51.042455 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 13 00:00:51.042462 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:00:51.042469 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:00:51.042476 kernel: pnp: PnP ACPI init
Aug 13 00:00:51.042482 kernel: pnp: PnP ACPI: found 0 devices
Aug 13 00:00:51.042490 kernel: NET: Registered PF_INET protocol family
Aug 13 00:00:51.042497 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:00:51.042504 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:00:51.042511 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:00:51.042517 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:00:51.042524 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Aug 13 00:00:51.042531 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:00:51.042537 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:00:51.042546 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:00:51.042553 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:00:51.042559 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:00:51.042566 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Aug 13 00:00:51.042573 kernel: kvm [1]: HYP mode not available
Aug 13 00:00:51.042580 kernel: Initialise system trusted keyrings
Aug 13 00:00:51.042586 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:00:51.042593 kernel: Key type asymmetric registered
Aug 13 00:00:51.042600 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:00:51.042607 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 13 00:00:51.042614 kernel: io scheduler mq-deadline registered
Aug 13 00:00:51.042621 kernel: io scheduler kyber registered
Aug 13 00:00:51.042628 kernel: io scheduler bfq registered
Aug 13 00:00:51.042635 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:00:51.042642 kernel: thunder_xcv, ver 1.0
Aug 13 00:00:51.042649 kernel: thunder_bgx, ver 1.0
Aug 13 00:00:51.042656 kernel: nicpf, ver 1.0
Aug 13 00:00:51.042662 kernel: nicvf, ver 1.0
Aug 13 00:00:51.042844 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 13 00:00:51.042932 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:00:50 UTC (1755043250)
Aug 13 00:00:51.042943 kernel: efifb: probing for efifb
Aug 13 00:00:51.042950 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Aug 13 00:00:51.042957 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Aug 13 00:00:51.042964 kernel: efifb: scrolling: redraw
Aug 13 00:00:51.042971 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 13 00:00:51.042978 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 00:00:51.042986 kernel: fb0: EFI VGA frame buffer device
Aug 13 00:00:51.042993 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Aug 13 00:00:51.043000 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:00:51.043007 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:00:51.043014 kernel: Segment Routing with IPv6
Aug 13 00:00:51.043020 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:00:51.043027 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:00:51.043034 kernel: Key type dns_resolver registered
Aug 13 00:00:51.043040 kernel: registered taskstats version 1
Aug 13 00:00:51.043047 kernel: Loading compiled-in X.509 certificates
Aug 13 00:00:51.043055 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 72b807ae6dac6ab18c2f4ab9460d3472cf28c19d'
Aug 13 00:00:51.043062 kernel: Key type .fscrypt registered
Aug 13 00:00:51.043068 kernel: Key type fscrypt-provisioning registered
Aug 13 00:00:51.043075 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:00:51.043082 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:00:51.043088 kernel: ima: No architecture policies found
Aug 13 00:00:51.043166 kernel: clk: Disabling unused clocks
Aug 13 00:00:51.043173 kernel: Freeing unused kernel memory: 36416K
Aug 13 00:00:51.043182 kernel: Run /init as init process
Aug 13 00:00:51.043189 kernel:   with arguments:
Aug 13 00:00:51.043196 kernel:     /init
Aug 13 00:00:51.043202 kernel:   with environment:
Aug 13 00:00:51.043209 kernel:     HOME=/
Aug 13 00:00:51.043215 kernel:     TERM=linux
Aug 13 00:00:51.043222 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:00:51.043231 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 00:00:51.043242 systemd[1]: Detected virtualization microsoft.
Aug 13 00:00:51.043249 systemd[1]: Detected architecture arm64.
Aug 13 00:00:51.043256 systemd[1]: Running in initrd.
Aug 13 00:00:51.043263 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:00:51.043270 systemd[1]: Hostname set to .
Aug 13 00:00:51.043277 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:00:51.043284 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:00:51.043292 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 00:00:51.043300 systemd[1]: Reached target cryptsetup.target.
Aug 13 00:00:51.043307 systemd[1]: Reached target paths.target.
Aug 13 00:00:51.043314 systemd[1]: Reached target slices.target.
Aug 13 00:00:51.043321 systemd[1]: Reached target swap.target.
Aug 13 00:00:51.043328 systemd[1]: Reached target timers.target.
Aug 13 00:00:51.043335 systemd[1]: Listening on iscsid.socket.
Aug 13 00:00:51.043342 systemd[1]: Listening on iscsiuio.socket.
Aug 13 00:00:51.043349 systemd[1]: Listening on systemd-journald-audit.socket.
Aug 13 00:00:51.043358 systemd[1]: Listening on systemd-journald-dev-log.socket.
Aug 13 00:00:51.043365 systemd[1]: Listening on systemd-journald.socket.
Aug 13 00:00:51.043372 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 00:00:51.043379 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 00:00:51.043386 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 00:00:51.043393 systemd[1]: Reached target sockets.target.
Aug 13 00:00:51.043400 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 00:00:51.043407 systemd[1]: Finished network-cleanup.service.
Aug 13 00:00:51.043414 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:00:51.043423 systemd[1]: Starting systemd-journald.service...
Aug 13 00:00:51.043430 systemd[1]: Starting systemd-modules-load.service...
Aug 13 00:00:51.043437 systemd[1]: Starting systemd-resolved.service...
Aug 13 00:00:51.043444 systemd[1]: Starting systemd-vconsole-setup.service...
Aug 13 00:00:51.043455 systemd-journald[276]: Journal started
Aug 13 00:00:51.043499 systemd-journald[276]: Runtime Journal (/run/log/journal/c3196f19e4f14fba8b4c12dff86d9235) is 8.0M, max 78.5M, 70.5M free.
Aug 13 00:00:51.021322 systemd-modules-load[277]: Inserted module 'overlay'
Aug 13 00:00:51.069356 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:00:51.078197 systemd-modules-load[277]: Inserted module 'br_netfilter'
Aug 13 00:00:51.088506 kernel: Bridge firewalling registered
Aug 13 00:00:51.088534 systemd[1]: Started systemd-journald.service.
Aug 13 00:00:51.085793 systemd-resolved[278]: Positive Trust Anchors:
Aug 13 00:00:51.085801 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:00:51.085830 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 00:00:51.186961 kernel: SCSI subsystem initialized
Aug 13 00:00:51.186988 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:00:51.186999 kernel: audit: type=1130 audit(1755043251.125:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.187016 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:00:51.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.088021 systemd-resolved[278]: Defaulting to hostname 'linux'.
Aug 13 00:00:51.205995 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Aug 13 00:00:51.159715 systemd[1]: Started systemd-resolved.service.
Aug 13 00:00:51.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.205173 systemd-modules-load[277]: Inserted module 'dm_multipath'
Aug 13 00:00:51.257983 kernel: audit: type=1130 audit(1755043251.210:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.258018 kernel: audit: type=1130 audit(1755043251.233:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.210907 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 00:00:51.234348 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:00:51.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.258841 systemd[1]: Finished systemd-modules-load.service.
Aug 13 00:00:51.313634 kernel: audit: type=1130 audit(1755043251.257:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.313658 kernel: audit: type=1130 audit(1755043251.283:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.283634 systemd[1]: Finished systemd-vconsole-setup.service.
Aug 13 00:00:51.338815 kernel: audit: type=1130 audit(1755043251.292:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.307531 systemd[1]: Reached target nss-lookup.target.
Aug 13 00:00:51.335379 systemd[1]: Starting dracut-cmdline-ask.service...
Aug 13 00:00:51.344324 systemd[1]: Starting systemd-sysctl.service...
Aug 13 00:00:51.362480 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 13 00:00:51.368205 systemd[1]: Finished dracut-cmdline-ask.service.
Aug 13 00:00:51.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.386621 systemd[1]: Finished systemd-sysctl.service.
Aug 13 00:00:51.411786 kernel: audit: type=1130 audit(1755043251.384:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.407737 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 13 00:00:51.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.434428 systemd[1]: Starting dracut-cmdline.service...
Aug 13 00:00:51.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.462215 dracut-cmdline[298]: dracut-dracut-053
Aug 13 00:00:51.467291 kernel: audit: type=1130 audit(1755043251.407:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.467313 kernel: audit: type=1130 audit(1755043251.433:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.467837 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=32404c0887e5b8a80b0f069916a8040bfd969c7a8f47a2db1168b24bc04220cc
Aug 13 00:00:51.558914 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:00:51.573911 kernel: iscsi: registered transport (tcp)
Aug 13 00:00:51.594897 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:00:51.594951 kernel: QLogic iSCSI HBA Driver
Aug 13 00:00:51.630227 systemd[1]: Finished dracut-cmdline.service.
Aug 13 00:00:51.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:51.635864 systemd[1]: Starting dracut-pre-udev.service...
Aug 13 00:00:51.692911 kernel: raid6: neonx8   gen() 13820 MB/s
Aug 13 00:00:51.710906 kernel: raid6: neonx8   xor() 10835 MB/s
Aug 13 00:00:51.730904 kernel: raid6: neonx4   gen() 13520 MB/s
Aug 13 00:00:51.751905 kernel: raid6: neonx4   xor() 10886 MB/s
Aug 13 00:00:51.771902 kernel: raid6: neonx2   gen() 12955 MB/s
Aug 13 00:00:51.791919 kernel: raid6: neonx2   xor() 10633 MB/s
Aug 13 00:00:51.812925 kernel: raid6: neonx1   gen() 10523 MB/s
Aug 13 00:00:51.832924 kernel: raid6: neonx1   xor()  8822 MB/s
Aug 13 00:00:51.853939 kernel: raid6: int64x8  gen()  6275 MB/s
Aug 13 00:00:51.875909 kernel: raid6: int64x8  xor()  3541 MB/s
Aug 13 00:00:51.895903 kernel: raid6: int64x4  gen()  7207 MB/s
Aug 13 00:00:51.915903 kernel: raid6: int64x4  xor()  3859 MB/s
Aug 13 00:00:51.936904 kernel: raid6: int64x2  gen()  6153 MB/s
Aug 13 00:00:51.957905 kernel: raid6: int64x2  xor()  3324 MB/s
Aug 13 00:00:51.978930 kernel: raid6: int64x1  gen()  5043 MB/s
Aug 13 00:00:52.003562 kernel: raid6: int64x1  xor()  2646 MB/s
Aug 13 00:00:52.003587 kernel: raid6: using algorithm neonx8 gen() 13820 MB/s
Aug 13 00:00:52.003596 kernel: raid6: .... xor() 10835 MB/s, rmw enabled
Aug 13 00:00:52.007838 kernel: raid6: using neon recovery algorithm
Aug 13 00:00:52.024907 kernel: xor: measuring software checksum speed
Aug 13 00:00:52.033606 kernel:    8regs           : 16162 MB/sec
Aug 13 00:00:52.033618 kernel:    32regs          : 20697 MB/sec
Aug 13 00:00:52.037420 kernel:    arm64_neon      : 27832 MB/sec
Aug 13 00:00:52.037429 kernel: xor: using function: arm64_neon (27832 MB/sec)
Aug 13 00:00:52.098917 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Aug 13 00:00:52.110052 systemd[1]: Finished dracut-pre-udev.service.
Aug 13 00:00:52.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:52.118000 audit: BPF prog-id=7 op=LOAD
Aug 13 00:00:52.118000 audit: BPF prog-id=8 op=LOAD
Aug 13 00:00:52.119510 systemd[1]: Starting systemd-udevd.service...
Aug 13 00:00:52.138556 systemd-udevd[474]: Using default interface naming scheme 'v252'.
Aug 13 00:00:52.141848 systemd[1]: Started systemd-udevd.service.
Aug 13 00:00:52.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:52.152238 systemd[1]: Starting dracut-pre-trigger.service...
Aug 13 00:00:52.177396 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation
Aug 13 00:00:52.211445 systemd[1]: Finished dracut-pre-trigger.service.
Aug 13 00:00:52.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:52.217437 systemd[1]: Starting systemd-udev-trigger.service...
Aug 13 00:00:52.254570 systemd[1]: Finished systemd-udev-trigger.service.
Aug 13 00:00:52.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:52.319062 kernel: hv_vmbus: Vmbus version:5.3 Aug 13 00:00:52.325910 kernel: hv_vmbus: registering driver hid_hyperv Aug 13 00:00:52.348081 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Aug 13 00:00:52.348135 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 13 00:00:52.354328 kernel: hv_vmbus: registering driver hv_netvsc Aug 13 00:00:52.354377 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 13 00:00:52.364317 kernel: hv_vmbus: registering driver hv_storvsc Aug 13 00:00:52.377290 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Aug 13 00:00:52.381152 kernel: scsi host1: storvsc_host_t Aug 13 00:00:52.381224 kernel: scsi host0: storvsc_host_t Aug 13 00:00:52.392748 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 13 00:00:52.400924 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 13 00:00:52.420219 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 13 00:00:52.443953 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 00:00:52.443978 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 13 00:00:52.455193 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 13 00:00:52.455297 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 00:00:52.455375 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 13 00:00:52.455451 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 13 00:00:52.455526 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 13 00:00:52.455610 kernel: sda: sda1 sda2 sda3 
sda4 sda6 sda7 sda9 Aug 13 00:00:52.455628 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 00:00:52.506285 kernel: hv_netvsc 0022487e-70c0-0022-487e-70c00022487e eth0: VF slot 1 added Aug 13 00:00:52.517923 kernel: hv_vmbus: registering driver hv_pci Aug 13 00:00:52.683574 kernel: hv_pci 00dcbecf-6d6b-4a03-9c16-63be5b362b29: PCI VMBus probing: Using version 0x10004 Aug 13 00:00:52.760085 kernel: hv_pci 00dcbecf-6d6b-4a03-9c16-63be5b362b29: PCI host bridge to bus 6d6b:00 Aug 13 00:00:52.760194 kernel: pci_bus 6d6b:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Aug 13 00:00:52.760300 kernel: pci_bus 6d6b:00: No busn resource found for root bus, will use [bus 00-ff] Aug 13 00:00:52.760370 kernel: pci 6d6b:00:02.0: [15b3:1018] type 00 class 0x020000 Aug 13 00:00:52.760459 kernel: pci 6d6b:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Aug 13 00:00:52.760535 kernel: pci 6d6b:00:02.0: enabling Extended Tags Aug 13 00:00:52.760609 kernel: pci 6d6b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6d6b:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Aug 13 00:00:52.760685 kernel: pci_bus 6d6b:00: busn_res: [bus 00-ff] end is updated to 00 Aug 13 00:00:52.760753 kernel: pci 6d6b:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Aug 13 00:00:52.797050 kernel: mlx5_core 6d6b:00:02.0: enabling device (0000 -> 0002) Aug 13 00:00:53.125496 kernel: mlx5_core 6d6b:00:02.0: firmware version: 16.31.2424 Aug 13 00:00:53.125627 kernel: mlx5_core 6d6b:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Aug 13 00:00:53.125713 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (528) Aug 13 00:00:53.125723 kernel: hv_netvsc 0022487e-70c0-0022-487e-70c00022487e eth0: VF registering: eth1 Aug 13 00:00:53.125807 kernel: mlx5_core 6d6b:00:02.0 eth1: joined to eth0 Aug 13 00:00:53.044095 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Aug 13 00:00:53.073963 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:00:53.143914 kernel: mlx5_core 6d6b:00:02.0 enP28011s1: renamed from eth1 Aug 13 00:00:53.247419 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 00:00:53.261869 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 00:00:53.268119 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 00:00:53.280821 systemd[1]: Starting disk-uuid.service... Aug 13 00:00:53.307918 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:00:53.317916 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:00:54.328839 disk-uuid[603]: The operation has completed successfully. Aug 13 00:00:54.334166 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:00:54.399446 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:00:54.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:54.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:54.399541 systemd[1]: Finished disk-uuid.service. Aug 13 00:00:54.409196 systemd[1]: Starting verity-setup.service... Aug 13 00:00:54.454051 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 13 00:00:54.756838 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:00:54.762338 systemd[1]: Finished verity-setup.service. Aug 13 00:00:54.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:54.774555 systemd[1]: Mounting sysusr-usr.mount... 
Aug 13 00:00:54.841920 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 00:00:54.842750 systemd[1]: Mounted sysusr-usr.mount. Aug 13 00:00:54.847250 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 00:00:54.848080 systemd[1]: Starting ignition-setup.service... Aug 13 00:00:54.856789 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 00:00:54.904375 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:00:54.904430 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:00:54.909273 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:00:54.962563 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 00:00:54.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:54.972000 audit: BPF prog-id=9 op=LOAD Aug 13 00:00:54.972734 systemd[1]: Starting systemd-networkd.service... Aug 13 00:00:54.999233 systemd-networkd[867]: lo: Link UP Aug 13 00:00:54.999245 systemd-networkd[867]: lo: Gained carrier Aug 13 00:00:55.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:54.999658 systemd-networkd[867]: Enumeration completed Aug 13 00:00:55.046633 kernel: kauditd_printk_skb: 12 callbacks suppressed Aug 13 00:00:55.046655 kernel: audit: type=1130 audit(1755043255.007:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:55.002778 systemd[1]: Started systemd-networkd.service. 
Aug 13 00:00:55.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:55.003129 systemd-networkd[867]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:00:55.016410 systemd[1]: Reached target network.target. Aug 13 00:00:55.085173 kernel: audit: type=1130 audit(1755043255.050:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:55.038404 systemd[1]: Starting iscsiuio.service... Aug 13 00:00:55.089544 iscsid[874]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:00:55.089544 iscsid[874]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Aug 13 00:00:55.089544 iscsid[874]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Aug 13 00:00:55.089544 iscsid[874]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 00:00:55.089544 iscsid[874]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 00:00:55.089544 iscsid[874]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:00:55.089544 iscsid[874]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 00:00:55.223076 kernel: audit: type=1130 audit(1755043255.127:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Aug 13 00:00:55.223101 kernel: mlx5_core 6d6b:00:02.0 enP28011s1: Link up Aug 13 00:00:55.223259 kernel: audit: type=1130 audit(1755043255.187:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:55.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:55.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:55.045125 systemd[1]: Started iscsiuio.service. Aug 13 00:00:55.080365 systemd[1]: Starting iscsid.service... Aug 13 00:00:55.097807 systemd[1]: Started iscsid.service. Aug 13 00:00:55.142306 systemd[1]: Starting dracut-initqueue.service... Aug 13 00:00:55.167019 systemd[1]: Finished dracut-initqueue.service. Aug 13 00:00:55.191980 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:00:55.192300 systemd[1]: Reached target remote-fs-pre.target. Aug 13 00:00:55.296193 kernel: audit: type=1130 audit(1755043255.266:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:55.296220 kernel: hv_netvsc 0022487e-70c0-0022-487e-70c00022487e eth0: Data path switched to VF: enP28011s1 Aug 13 00:00:55.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:55.216009 systemd[1]: Reached target remote-cryptsetup.target. 
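The iscsid warning above complains that /etc/iscsi/initiatorname.iscsi is missing. A minimal sketch of the fix it asks for, writing a syntactically valid InitiatorName of the documented iqn.yyyy-mm.&lt;reversed domain name&gt;[:identifier] form, is below; the date, reversed domain, and default path are illustrative assumptions, not values from this system (open-iscsi also ships an `iscsi-iname` tool for this purpose).

```python
import uuid

def make_initiator_name(date: str = "2025-08",
                        reversed_domain: str = "org.flatcar") -> str:
    """Build an iqn.yyyy-mm.<reversed domain>:<identifier> name (values assumed)."""
    return f"iqn.{date}.{reversed_domain}:{uuid.uuid4().hex[:12]}"

def write_initiator_file(path: str = "/etc/iscsi/initiatorname.iscsi") -> str:
    """Write the InitiatorName= line iscsid expects; return the chosen name."""
    name = make_initiator_name()
    with open(path, "w") as f:
        f.write(f"InitiatorName={name}\n")
    return name
```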
Aug 13 00:00:55.306088 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:00:55.227995 systemd[1]: Reached target remote-fs.target. Aug 13 00:00:55.243483 systemd[1]: Starting dracut-pre-mount.service... Aug 13 00:00:55.258109 systemd[1]: Finished dracut-pre-mount.service. Aug 13 00:00:55.310548 systemd-networkd[867]: enP28011s1: Link UP Aug 13 00:00:55.310628 systemd-networkd[867]: eth0: Link UP Aug 13 00:00:55.310753 systemd-networkd[867]: eth0: Gained carrier Aug 13 00:00:55.327427 systemd-networkd[867]: enP28011s1: Gained carrier Aug 13 00:00:55.340581 systemd[1]: Finished ignition-setup.service. Aug 13 00:00:55.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:55.345133 systemd-networkd[867]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 13 00:00:55.374597 kernel: audit: type=1130 audit(1755043255.345:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:55.365073 systemd[1]: Starting ignition-fetch-offline.service... 
Aug 13 00:00:56.523004 systemd-networkd[867]: eth0: Gained IPv6LL Aug 13 00:00:58.346988 ignition[895]: Ignition 2.14.0 Aug 13 00:00:58.350424 ignition[895]: Stage: fetch-offline Aug 13 00:00:58.350548 ignition[895]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:00:58.350578 ignition[895]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:00:58.424527 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:00:58.424719 ignition[895]: parsed url from cmdline: "" Aug 13 00:00:58.424723 ignition[895]: no config URL provided Aug 13 00:00:58.424728 ignition[895]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:00:58.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:58.431805 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 00:00:58.470167 kernel: audit: type=1130 audit(1755043258.440:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:58.424737 ignition[895]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:00:58.464286 systemd[1]: Starting ignition-fetch.service... 
Aug 13 00:00:58.424743 ignition[895]: failed to fetch config: resource requires networking Aug 13 00:00:58.425030 ignition[895]: Ignition finished successfully Aug 13 00:00:58.475411 ignition[901]: Ignition 2.14.0 Aug 13 00:00:58.475417 ignition[901]: Stage: fetch Aug 13 00:00:58.475532 ignition[901]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:00:58.475560 ignition[901]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:00:58.478546 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:00:58.478932 ignition[901]: parsed url from cmdline: "" Aug 13 00:00:58.478937 ignition[901]: no config URL provided Aug 13 00:00:58.478944 ignition[901]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:00:58.478956 ignition[901]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:00:58.478988 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 13 00:00:58.594787 ignition[901]: GET result: OK Aug 13 00:00:58.594867 ignition[901]: config has been read from IMDS userdata Aug 13 00:00:58.598242 unknown[901]: fetched base config from "system" Aug 13 00:00:58.594935 ignition[901]: parsing config with SHA512: ea3151dc48a55d415bc6b69f04bb2333ca8bb519cf0b3547a379a0a9e11b3c3976434b871feea115d9a1c0f57615eb2e51e10890c80dbaa0f7ca48dcff6f0ac0 Aug 13 00:00:58.598250 unknown[901]: fetched base config from "system" Aug 13 00:00:58.636973 kernel: audit: type=1130 audit(1755043258.612:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:58.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Aug 13 00:00:58.598910 ignition[901]: fetch: fetch complete Aug 13 00:00:58.598265 unknown[901]: fetched user config from "azure" Aug 13 00:00:58.598915 ignition[901]: fetch: fetch passed Aug 13 00:00:58.604164 systemd[1]: Finished ignition-fetch.service. Aug 13 00:00:58.598966 ignition[901]: Ignition finished successfully Aug 13 00:00:58.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:58.613643 systemd[1]: Starting ignition-kargs.service... Aug 13 00:00:58.643532 ignition[907]: Ignition 2.14.0 Aug 13 00:00:58.687280 kernel: audit: type=1130 audit(1755043258.658:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:58.654072 systemd[1]: Finished ignition-kargs.service. Aug 13 00:00:58.643538 ignition[907]: Stage: kargs Aug 13 00:00:58.683155 systemd[1]: Starting ignition-disks.service... Aug 13 00:00:58.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:58.643645 ignition[907]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:00:58.745885 kernel: audit: type=1130 audit(1755043258.711:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:58.706965 systemd[1]: Finished ignition-disks.service. 
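The ignition-fetch lines above GET the Azure IMDS userData endpoint and report "config has been read from IMDS userdata". IMDS returns userData base64-encoded, so a client has to decode it before parsing; this sketch shows just that decode step against a stand-in payload instead of a live IMDS call (a real call would also need the `Metadata: true` request header).

```python
import base64

# Endpoint taken verbatim from the log line above.
IMDS_USERDATA_URL = ("http://169.254.169.254/metadata/instance/compute/"
                     "userData?api-version=2021-01-01&format=text")

def decode_userdata(raw: bytes) -> str:
    """Decode the base64 payload that IMDS returns for userData."""
    return base64.b64decode(raw).decode("utf-8")

# Stand-in payload shaped like an Ignition config, not the real userdata.
sample = base64.b64encode(b'{"ignition": {"version": "2.14.0"}}')
print(decode_userdata(sample))  # → {"ignition": {"version": "2.14.0"}}
```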
Aug 13 00:00:58.643663 ignition[907]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:00:58.731949 systemd[1]: Reached target initrd-root-device.target. Aug 13 00:00:58.646525 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:00:58.739125 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:00:58.650012 ignition[907]: kargs: kargs passed Aug 13 00:00:58.750480 systemd[1]: Reached target local-fs.target. Aug 13 00:00:58.650155 ignition[907]: Ignition finished successfully Aug 13 00:00:58.759078 systemd[1]: Reached target sysinit.target. Aug 13 00:00:58.694812 ignition[913]: Ignition 2.14.0 Aug 13 00:00:58.767833 systemd[1]: Reached target basic.target. Aug 13 00:00:58.694818 ignition[913]: Stage: disks Aug 13 00:00:58.778675 systemd[1]: Starting systemd-fsck-root.service... Aug 13 00:00:58.694943 ignition[913]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:00:58.694963 ignition[913]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:00:58.702241 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:00:58.704244 ignition[913]: disks: disks passed Aug 13 00:00:58.704295 ignition[913]: Ignition finished successfully Aug 13 00:00:58.841150 systemd-fsck[921]: ROOT: clean, 629/7326000 files, 481082/7359488 blocks Aug 13 00:00:58.848158 systemd[1]: Finished systemd-fsck-root.service. Aug 13 00:00:58.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:58.856569 systemd[1]: Mounting sysroot.mount... Aug 13 00:00:58.890930 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. 
Opts: (null). Quota mode: none. Aug 13 00:00:58.891726 systemd[1]: Mounted sysroot.mount. Aug 13 00:00:58.895670 systemd[1]: Reached target initrd-root-fs.target. Aug 13 00:00:58.930109 systemd[1]: Mounting sysroot-usr.mount... Aug 13 00:00:58.935185 systemd[1]: Starting flatcar-metadata-hostname.service... Aug 13 00:00:58.943041 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:00:58.943073 systemd[1]: Reached target ignition-diskful.target. Aug 13 00:00:58.949196 systemd[1]: Mounted sysroot-usr.mount. Aug 13 00:00:59.009816 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:00:59.015481 systemd[1]: Starting initrd-setup-root.service... Aug 13 00:00:59.039915 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (932) Aug 13 00:00:59.052045 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:00:59.052094 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:00:59.052104 initrd-setup-root[937]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:00:59.063666 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:00:59.070664 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:00:59.093772 initrd-setup-root[963]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:00:59.115843 initrd-setup-root[971]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:00:59.138941 initrd-setup-root[979]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:00:59.740253 systemd[1]: Finished initrd-setup-root.service. Aug 13 00:00:59.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:59.746220 systemd[1]: Starting ignition-mount.service... 
Aug 13 00:00:59.754799 systemd[1]: Starting sysroot-boot.service... Aug 13 00:00:59.771423 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Aug 13 00:00:59.771539 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Aug 13 00:00:59.797295 systemd[1]: Finished sysroot-boot.service. Aug 13 00:00:59.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:59.807877 ignition[1001]: INFO : Ignition 2.14.0 Aug 13 00:00:59.807877 ignition[1001]: INFO : Stage: mount Aug 13 00:00:59.807877 ignition[1001]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:00:59.807877 ignition[1001]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:00:59.807877 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:00:59.807877 ignition[1001]: INFO : mount: mount passed Aug 13 00:00:59.807877 ignition[1001]: INFO : Ignition finished successfully Aug 13 00:00:59.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:59.812072 systemd[1]: Finished ignition-mount.service. 
Aug 13 00:01:00.160518 coreos-metadata[931]: Aug 13 00:01:00.160 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 00:01:00.170788 coreos-metadata[931]: Aug 13 00:01:00.170 INFO Fetch successful Aug 13 00:01:00.204950 coreos-metadata[931]: Aug 13 00:01:00.204 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 13 00:01:00.229906 coreos-metadata[931]: Aug 13 00:01:00.229 INFO Fetch successful Aug 13 00:01:00.245520 coreos-metadata[931]: Aug 13 00:01:00.245 INFO wrote hostname ci-3510.3.8-a-dd293077f6 to /sysroot/etc/hostname Aug 13 00:01:00.255082 systemd[1]: Finished flatcar-metadata-hostname.service. Aug 13 00:01:00.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:00.275495 kernel: kauditd_printk_skb: 4 callbacks suppressed Aug 13 00:01:00.275544 kernel: audit: type=1130 audit(1755043260.260:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:00.270589 systemd[1]: Starting ignition-files.service... Aug 13 00:01:00.295985 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:01:00.320911 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1010) Aug 13 00:01:00.333630 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:01:00.333644 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:01:00.333653 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:01:00.346278 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
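The coreos-metadata lines above fetch the instance name from IMDS and write it to /sysroot/etc/hostname. A sketch of those two steps, assuming the URL from the log, the `Metadata: true` header IMDS requires, and a /sysroot default that mirrors the log's target path:

```python
from urllib import request

# Endpoint copied from the coreos-metadata log line above.
IMDS_NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
                 "?api-version=2017-08-01&format=text")

def fetch_instance_name(url: str = IMDS_NAME_URL) -> str:
    """Fetch the VM name from IMDS (requires the Metadata: true header)."""
    req = request.Request(url, headers={"Metadata": "true"})
    with request.urlopen(req, timeout=5) as resp:
        return resp.read().decode().strip()

def write_hostname(name: str, sysroot: str = "/sysroot") -> None:
    """Write the fetched name to <sysroot>/etc/hostname."""
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(name + "\n")
```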
Aug 13 00:01:00.362269 ignition[1029]: INFO : Ignition 2.14.0 Aug 13 00:01:00.362269 ignition[1029]: INFO : Stage: files Aug 13 00:01:00.374088 ignition[1029]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:01:00.374088 ignition[1029]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:01:00.374088 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:01:00.374088 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:01:00.408745 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:01:00.408745 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:01:00.459764 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:01:00.468012 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:01:00.480067 unknown[1029]: wrote ssh authorized keys file for user: core Aug 13 00:01:00.486126 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:01:00.494492 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:01:00.494492 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:01:00.494492 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 00:01:00.494492 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 13 00:01:00.678841 ignition[1029]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:01:01.527240 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 00:01:01.538613 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:01:01.538613 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:01:01.538613 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:01:01.538613 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:01:01.538613 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:01:01.538613 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:01:01.538613 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:01:01.538613 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:01:01.619494 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:01:01.619494 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:01:01.619494 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 
00:01:01.619494 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:01:01.619494 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Aug 13 00:01:01.619494 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 00:01:01.619494 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1689343704" Aug 13 00:01:01.619494 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1689343704": device or resource busy Aug 13 00:01:01.619494 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1689343704", trying btrfs: device or resource busy Aug 13 00:01:01.619494 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1689343704" Aug 13 00:01:01.619494 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1689343704" Aug 13 00:01:01.619494 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1689343704" Aug 13 00:01:01.619494 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1689343704" Aug 13 00:01:01.619494 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Aug 13 00:01:01.580366 systemd[1]: mnt-oem1689343704.mount: 
Deactivated successfully. Aug 13 00:01:01.799451 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Aug 13 00:01:01.799451 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 00:01:01.799451 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem148165888" Aug 13 00:01:01.799451 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem148165888": device or resource busy Aug 13 00:01:01.799451 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem148165888", trying btrfs: device or resource busy Aug 13 00:01:01.799451 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem148165888" Aug 13 00:01:01.799451 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem148165888" Aug 13 00:01:01.799451 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem148165888" Aug 13 00:01:01.799451 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem148165888" Aug 13 00:01:01.799451 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Aug 13 00:01:01.799451 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:01:01.799451 ignition[1029]: INFO : files: 
createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Aug 13 00:01:02.159822 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Aug 13 00:01:02.411341 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:01:02.424166 ignition[1029]: INFO : files: op(14): [started] processing unit "waagent.service" Aug 13 00:01:02.424166 ignition[1029]: INFO : files: op(14): [finished] processing unit "waagent.service" Aug 13 00:01:02.424166 ignition[1029]: INFO : files: op(15): [started] processing unit "nvidia.service" Aug 13 00:01:02.424166 ignition[1029]: INFO : files: op(15): [finished] processing unit "nvidia.service" Aug 13 00:01:02.424166 ignition[1029]: INFO : files: op(16): [started] processing unit "containerd.service" Aug 13 00:01:02.424166 ignition[1029]: INFO : files: op(16): op(17): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:01:02.424166 ignition[1029]: INFO : files: op(16): op(17): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:01:02.424166 ignition[1029]: INFO : files: op(16): [finished] processing unit "containerd.service" Aug 13 00:01:02.424166 ignition[1029]: INFO : files: op(18): [started] processing unit "prepare-helm.service" Aug 13 00:01:02.608603 kernel: audit: type=1130 audit(1755043262.443:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:02.608631 kernel: audit: type=1130 audit(1755043262.497:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.608641 kernel: audit: type=1131 audit(1755043262.497:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.608651 kernel: audit: type=1130 audit(1755043262.566:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.435236 systemd[1]: Finished ignition-files.service. 
Aug 13 00:01:02.613123 ignition[1029]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:01:02.613123 ignition[1029]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:01:02.613123 ignition[1029]: INFO : files: op(18): [finished] processing unit "prepare-helm.service" Aug 13 00:01:02.613123 ignition[1029]: INFO : files: op(1a): [started] setting preset to enabled for "waagent.service" Aug 13 00:01:02.613123 ignition[1029]: INFO : files: op(1a): [finished] setting preset to enabled for "waagent.service" Aug 13 00:01:02.613123 ignition[1029]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service" Aug 13 00:01:02.613123 ignition[1029]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service" Aug 13 00:01:02.613123 ignition[1029]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:01:02.613123 ignition[1029]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:01:02.613123 ignition[1029]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:01:02.613123 ignition[1029]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:01:02.613123 ignition[1029]: INFO : files: files passed Aug 13 00:01:02.613123 ignition[1029]: INFO : Ignition finished successfully Aug 13 00:01:02.802001 kernel: audit: type=1130 audit(1755043262.640:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:02.802036 kernel: audit: type=1131 audit(1755043262.665:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.802048 kernel: audit: type=1130 audit(1755043262.778:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.467639 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 00:01:02.475131 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 00:01:02.819870 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:01:02.480628 systemd[1]: Starting ignition-quench.service... Aug 13 00:01:02.493392 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:01:02.493492 systemd[1]: Finished ignition-quench.service. Aug 13 00:01:02.881975 kernel: audit: type=1131 audit(1755043262.856:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 00:01:02.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.560878 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 00:01:02.566958 systemd[1]: Reached target ignition-complete.target. Aug 13 00:01:02.596810 systemd[1]: Starting initrd-parse-etc.service... Aug 13 00:01:02.626280 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:01:02.626397 systemd[1]: Finished initrd-parse-etc.service. Aug 13 00:01:02.666100 systemd[1]: Reached target initrd-fs.target. Aug 13 00:01:02.693785 systemd[1]: Reached target initrd.target. Aug 13 00:01:02.707392 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 00:01:02.708340 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 00:01:02.769317 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 00:01:02.779092 systemd[1]: Starting initrd-cleanup.service... Aug 13 00:01:03.007688 kernel: audit: type=1131 audit(1755043262.983:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.811078 systemd[1]: Stopped target nss-lookup.target. Aug 13 00:01:02.824609 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 00:01:03.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:03.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.838341 systemd[1]: Stopped target timers.target. Aug 13 00:01:03.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.846838 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:01:03.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.846951 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 00:01:02.857285 systemd[1]: Stopped target initrd.target. Aug 13 00:01:02.881428 systemd[1]: Stopped target basic.target. Aug 13 00:01:03.065984 ignition[1067]: INFO : Ignition 2.14.0 Aug 13 00:01:03.065984 ignition[1067]: INFO : Stage: umount Aug 13 00:01:03.065984 ignition[1067]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:01:03.065984 ignition[1067]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:01:03.065984 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:01:03.065984 ignition[1067]: INFO : umount: umount passed Aug 13 00:01:03.065984 ignition[1067]: INFO : Ignition finished successfully Aug 13 00:01:03.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:03.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.152778 iscsid[874]: iscsid shutting down. Aug 13 00:01:03.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.885776 systemd[1]: Stopped target ignition-complete.target. Aug 13 00:01:03.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.896071 systemd[1]: Stopped target ignition-diskful.target. Aug 13 00:01:02.906074 systemd[1]: Stopped target initrd-root-device.target. Aug 13 00:01:03.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:02.915863 systemd[1]: Stopped target remote-fs.target. Aug 13 00:01:02.926058 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 00:01:03.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.935229 systemd[1]: Stopped target sysinit.target. Aug 13 00:01:02.943791 systemd[1]: Stopped target local-fs.target. Aug 13 00:01:02.952583 systemd[1]: Stopped target local-fs-pre.target. Aug 13 00:01:02.965224 systemd[1]: Stopped target swap.target. Aug 13 00:01:02.974235 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:01:02.974312 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 00:01:02.983536 systemd[1]: Stopped target cryptsetup.target. Aug 13 00:01:03.008044 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:01:03.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.008104 systemd[1]: Stopped dracut-initqueue.service. Aug 13 00:01:03.016104 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:01:03.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.016165 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 00:01:03.022355 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:01:03.022398 systemd[1]: Stopped ignition-files.service. Aug 13 00:01:03.031157 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 00:01:03.031202 systemd[1]: Stopped flatcar-metadata-hostname.service. 
Aug 13 00:01:03.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.045199 systemd[1]: Stopping ignition-mount.service... Aug 13 00:01:03.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.327000 audit: BPF prog-id=6 op=UNLOAD Aug 13 00:01:03.062159 systemd[1]: Stopping iscsid.service... Aug 13 00:01:03.069969 systemd[1]: Stopping sysroot-boot.service... Aug 13 00:01:03.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.077408 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:01:03.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.077474 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 00:01:03.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.082383 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:01:03.082426 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 00:01:03.103186 systemd[1]: iscsid.service: Deactivated successfully. Aug 13 00:01:03.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:03.103317 systemd[1]: Stopped iscsid.service. Aug 13 00:01:03.120522 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:01:03.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.120598 systemd[1]: Finished initrd-cleanup.service. Aug 13 00:01:03.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.136359 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:01:03.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.136771 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:01:03.136863 systemd[1]: Stopped ignition-mount.service. Aug 13 00:01:03.142061 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:01:03.142116 systemd[1]: Stopped ignition-disks.service. Aug 13 00:01:03.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.157013 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:01:03.157066 systemd[1]: Stopped ignition-kargs.service. Aug 13 00:01:03.494048 kernel: hv_netvsc 0022487e-70c0-0022-487e-70c00022487e eth0: Data path switched from VF: enP28011s1 Aug 13 00:01:03.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:03.164678 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:01:03.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.164712 systemd[1]: Stopped ignition-fetch.service. Aug 13 00:01:03.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.178362 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:01:03.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.178406 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 00:01:03.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:03.191850 systemd[1]: Stopped target paths.target. Aug 13 00:01:03.200247 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:01:03.207912 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 00:01:03.213153 systemd[1]: Stopped target slices.target. Aug 13 00:01:03.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:03.225359 systemd[1]: Stopped target sockets.target. Aug 13 00:01:03.234814 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:01:03.234863 systemd[1]: Closed iscsid.socket. Aug 13 00:01:03.242264 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:01:03.242306 systemd[1]: Stopped ignition-setup.service. Aug 13 00:01:03.250935 systemd[1]: Stopping iscsiuio.service... Aug 13 00:01:03.261614 systemd[1]: iscsiuio.service: Deactivated successfully. Aug 13 00:01:03.261709 systemd[1]: Stopped iscsiuio.service. Aug 13 00:01:03.269287 systemd[1]: Stopped target network.target. Aug 13 00:01:03.279413 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:01:03.279441 systemd[1]: Closed iscsiuio.socket. Aug 13 00:01:03.288159 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:01:03.623000 audit: BPF prog-id=5 op=UNLOAD Aug 13 00:01:03.623000 audit: BPF prog-id=4 op=UNLOAD Aug 13 00:01:03.623000 audit: BPF prog-id=3 op=UNLOAD Aug 13 00:01:03.623000 audit: BPF prog-id=8 op=UNLOAD Aug 13 00:01:03.623000 audit: BPF prog-id=7 op=UNLOAD Aug 13 00:01:03.297881 systemd[1]: Stopping systemd-resolved.service... Aug 13 00:01:03.305938 systemd-networkd[867]: eth0: DHCPv6 lease lost Aug 13 00:01:03.628000 audit: BPF prog-id=9 op=UNLOAD Aug 13 00:01:03.307253 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:01:03.307367 systemd[1]: Stopped systemd-resolved.service. Aug 13 00:01:03.646917 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Aug 13 00:01:03.318379 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:01:03.318472 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:01:03.327592 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:01:03.327627 systemd[1]: Closed systemd-networkd.socket. Aug 13 00:01:03.336286 systemd[1]: Stopping network-cleanup.service... 
Aug 13 00:01:03.343446 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:01:03.343503 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 00:01:03.348709 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:01:03.348763 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:01:03.363123 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:01:03.363173 systemd[1]: Stopped systemd-modules-load.service. Aug 13 00:01:03.369199 systemd[1]: Stopping systemd-udevd.service... Aug 13 00:01:03.378606 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:01:03.383568 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:01:03.383734 systemd[1]: Stopped systemd-udevd.service. Aug 13 00:01:03.392849 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:01:03.393031 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 00:01:03.402981 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:01:03.403020 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 00:01:03.412687 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:01:03.412742 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 00:01:03.418266 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:01:03.418313 systemd[1]: Stopped dracut-cmdline.service. Aug 13 00:01:03.427547 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:01:03.427586 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 00:01:03.440741 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 00:01:03.455479 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:01:03.455565 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Aug 13 00:01:03.480082 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Aug 13 00:01:03.480152 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 00:01:03.490222 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:01:03.490270 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 00:01:03.500452 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 00:01:03.500954 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:01:03.501066 systemd[1]: Stopped sysroot-boot.service. Aug 13 00:01:03.509068 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:01:03.509151 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 00:01:03.518712 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:01:03.518759 systemd[1]: Stopped initrd-setup-root.service. Aug 13 00:01:03.544563 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:01:03.544660 systemd[1]: Stopped network-cleanup.service. Aug 13 00:01:03.553920 systemd[1]: Reached target initrd-switch-root.target. Aug 13 00:01:03.563963 systemd[1]: Starting initrd-switch-root.service... Aug 13 00:01:03.619621 systemd[1]: Switching root. Aug 13 00:01:03.648012 systemd-journald[276]: Journal stopped Aug 13 00:01:19.908826 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 00:01:19.908847 kernel: SELinux: Class anon_inode not defined in policy. 
Aug 13 00:01:19.908858 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 00:01:19.908868 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:01:19.908876 kernel: SELinux: policy capability open_perms=1 Aug 13 00:01:19.908884 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:01:19.908949 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:01:19.908959 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:01:19.908967 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:01:19.908975 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:01:19.908983 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:01:19.908993 kernel: kauditd_printk_skb: 41 callbacks suppressed Aug 13 00:01:19.909001 kernel: audit: type=1403 audit(1755043266.586:88): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:01:19.909011 systemd[1]: Successfully loaded SELinux policy in 273.144ms. Aug 13 00:01:19.909023 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.385ms. Aug 13 00:01:19.909034 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:01:19.909046 systemd[1]: Detected virtualization microsoft. Aug 13 00:01:19.909054 systemd[1]: Detected architecture arm64. Aug 13 00:01:19.909063 systemd[1]: Detected first boot. Aug 13 00:01:19.909072 systemd[1]: Hostname set to . Aug 13 00:01:19.909082 systemd[1]: Initializing machine ID from random generator. Aug 13 00:01:19.909091 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Aug 13 00:01:19.909102 kernel: audit: type=1400 audit(1755043271.358:89): avc: denied { associate } for pid=1117 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 00:01:19.909112 kernel: audit: type=1300 audit(1755043271.358:89): arch=c00000b7 syscall=5 success=yes exit=0 a0=400002221c a1=40000282b8 a2=4000026440 a3=32 items=0 ppid=1100 pid=1117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:01:19.909122 kernel: audit: type=1327 audit(1755043271.358:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:01:19.909130 kernel: audit: type=1400 audit(1755043271.370:90): avc: denied { associate } for pid=1117 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Aug 13 00:01:19.909140 kernel: audit: type=1300 audit(1755043271.370:90): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000222f5 a2=1ed a3=0 items=2 ppid=1100 pid=1117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:01:19.909150 kernel: audit: type=1307 audit(1755043271.370:90): cwd="/" Aug 13 00:01:19.909159 kernel: audit: type=1302 audit(1755043271.370:90): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:19.909168 kernel: audit: type=1302 audit(1755043271.370:90): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:19.909177 kernel: audit: type=1327 audit(1755043271.370:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:01:19.909186 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:01:19.909195 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:01:19.909205 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:01:19.909216 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:01:19.909226 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:01:19.909234 systemd[1]: Unnecessary job was removed for dev-sda6.device. Aug 13 00:01:19.909245 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 00:01:19.909254 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 00:01:19.909263 systemd[1]: Created slice system-getty.slice. Aug 13 00:01:19.909274 systemd[1]: Created slice system-modprobe.slice. Aug 13 00:01:19.909285 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 00:01:19.909294 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
Aug 13 00:01:19.909304 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 00:01:19.909313 systemd[1]: Created slice user.slice. Aug 13 00:01:19.909323 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:01:19.909332 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 00:01:19.909341 systemd[1]: Set up automount boot.automount. Aug 13 00:01:19.909350 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 00:01:19.909360 systemd[1]: Reached target integritysetup.target. Aug 13 00:01:19.909370 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:01:19.909379 systemd[1]: Reached target remote-fs.target. Aug 13 00:01:19.909388 systemd[1]: Reached target slices.target. Aug 13 00:01:19.909398 systemd[1]: Reached target swap.target. Aug 13 00:01:19.909407 systemd[1]: Reached target torcx.target. Aug 13 00:01:19.909416 systemd[1]: Reached target veritysetup.target. Aug 13 00:01:19.909425 systemd[1]: Listening on systemd-coredump.socket. Aug 13 00:01:19.909435 systemd[1]: Listening on systemd-initctl.socket. Aug 13 00:01:19.909446 kernel: audit: type=1400 audit(1755043279.436:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:01:19.909455 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:01:19.909465 kernel: audit: type=1335 audit(1755043279.436:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 00:01:19.909474 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:01:19.909484 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:01:19.909493 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:01:19.909502 systemd[1]: Listening on systemd-udevd-control.socket. 
Aug 13 00:01:19.909513 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:01:19.909522 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 00:01:19.909532 systemd[1]: Mounting dev-hugepages.mount... Aug 13 00:01:19.909541 systemd[1]: Mounting dev-mqueue.mount... Aug 13 00:01:19.909550 systemd[1]: Mounting media.mount... Aug 13 00:01:19.909559 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 00:01:19.909572 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 00:01:19.909581 systemd[1]: Mounting tmp.mount... Aug 13 00:01:19.909591 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 00:01:19.909601 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:01:19.909610 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:01:19.909619 systemd[1]: Starting modprobe@configfs.service... Aug 13 00:01:19.909629 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:01:19.909638 systemd[1]: Starting modprobe@drm.service... Aug 13 00:01:19.909649 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:01:19.909659 systemd[1]: Starting modprobe@fuse.service... Aug 13 00:01:19.909669 systemd[1]: Starting modprobe@loop.service... Aug 13 00:01:19.909678 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:01:19.909688 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 00:01:19.909698 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Aug 13 00:01:19.909707 systemd[1]: Starting systemd-journald.service... Aug 13 00:01:19.909716 kernel: loop: module loaded Aug 13 00:01:19.909725 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:01:19.909734 systemd[1]: Starting systemd-network-generator.service... Aug 13 00:01:19.909745 systemd[1]: Starting systemd-remount-fs.service... 
Aug 13 00:01:19.909755 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:01:19.909763 kernel: fuse: init (API version 7.34) Aug 13 00:01:19.909772 systemd[1]: Mounted dev-hugepages.mount. Aug 13 00:01:19.909781 systemd[1]: Mounted dev-mqueue.mount. Aug 13 00:01:19.909790 systemd[1]: Mounted media.mount. Aug 13 00:01:19.909800 kernel: audit: type=1305 audit(1755043279.905:93): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:01:19.909813 systemd-journald[1237]: Journal started Aug 13 00:01:19.909856 systemd-journald[1237]: Runtime Journal (/run/log/journal/7a040348d9014703a34b97faf7fd8f66) is 8.0M, max 78.5M, 70.5M free. Aug 13 00:01:19.436000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 00:01:19.905000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:01:19.924025 systemd[1]: Started systemd-journald.service. 
Aug 13 00:01:19.924087 kernel: audit: type=1300 audit(1755043279.905:93): arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffd827aea0 a2=4000 a3=1 items=0 ppid=1 pid=1237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:01:19.905000 audit[1237]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffd827aea0 a2=4000 a3=1 items=0 ppid=1 pid=1237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:01:19.905000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:01:19.964862 kernel: audit: type=1327 audit(1755043279.905:93): proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:01:19.964923 kernel: audit: type=1130 audit(1755043279.955:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:19.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:19.965595 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 00:01:19.988082 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 00:01:19.993029 systemd[1]: Mounted tmp.mount. Aug 13 00:01:19.997125 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 00:01:20.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:20.002535 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:01:20.027984 kernel: audit: type=1130 audit(1755043280.001:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.028718 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:01:20.029140 systemd[1]: Finished modprobe@configfs.service. Aug 13 00:01:20.052954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:01:20.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.060158 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:01:20.073554 kernel: audit: type=1130 audit(1755043280.027:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.102225 kernel: audit: type=1130 audit(1755043280.051:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.102248 kernel: audit: type=1131 audit(1755043280.051:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:20.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.114669 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:01:20.114869 systemd[1]: Finished modprobe@drm.service. Aug 13 00:01:20.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.120480 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:01:20.120651 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:01:20.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:20.126153 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:01:20.126313 systemd[1]: Finished modprobe@fuse.service. Aug 13 00:01:20.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.131209 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:01:20.131382 systemd[1]: Finished modprobe@loop.service. Aug 13 00:01:20.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.136864 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:01:20.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.142636 systemd[1]: Finished systemd-network-generator.service. Aug 13 00:01:20.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.148588 systemd[1]: Finished systemd-remount-fs.service. 
Aug 13 00:01:20.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.154609 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:01:20.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.160840 systemd[1]: Reached target network-pre.target. Aug 13 00:01:20.166953 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 00:01:20.172917 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 00:01:20.177389 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:01:20.217013 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 00:01:20.222778 systemd[1]: Starting systemd-journal-flush.service... Aug 13 00:01:20.227604 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:01:20.228837 systemd[1]: Starting systemd-random-seed.service... Aug 13 00:01:20.233440 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:01:20.234680 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:01:20.240549 systemd[1]: Starting systemd-sysusers.service... Aug 13 00:01:20.246540 systemd[1]: Starting systemd-udev-settle.service... Aug 13 00:01:20.254121 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 00:01:20.259588 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 00:01:20.266416 udevadm[1267]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Aug 13 00:01:20.278238 systemd-journald[1237]: Time spent on flushing to /var/log/journal/7a040348d9014703a34b97faf7fd8f66 is 13.474ms for 1040 entries. Aug 13 00:01:20.278238 systemd-journald[1237]: System Journal (/var/log/journal/7a040348d9014703a34b97faf7fd8f66) is 8.0M, max 2.6G, 2.6G free. Aug 13 00:01:20.347588 systemd-journald[1237]: Received client request to flush runtime journal. Aug 13 00:01:20.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.292775 systemd[1]: Finished systemd-random-seed.service. Aug 13 00:01:20.298354 systemd[1]: Reached target first-boot-complete.target. Aug 13 00:01:20.348748 systemd[1]: Finished systemd-journal-flush.service. Aug 13 00:01:20.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:20.407495 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:01:20.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:21.457503 systemd[1]: Finished systemd-sysusers.service. Aug 13 00:01:21.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:21.463835 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:01:23.268277 systemd[1]: Finished systemd-hwdb-update.service. 
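The journald flush statistics above (13.474 ms for 1040 entries) work out to roughly 13 µs per entry written to the persistent journal, a quick back-of-the-envelope check:

```python
flush_ms = 13.474   # total flush time reported by systemd-journald
entries = 1040      # entries flushed to /var/log/journal

per_entry_us = flush_ms / entries * 1000  # convert ms/entry to µs/entry
print(f"{per_entry_us:.1f} µs per entry")  # → 13.0 µs per entry
```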
Aug 13 00:01:23.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:23.602931 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:01:23.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:23.610774 systemd[1]: Starting systemd-udevd.service... Aug 13 00:01:23.630830 systemd-udevd[1278]: Using default interface naming scheme 'v252'. Aug 13 00:01:24.615131 systemd[1]: Started systemd-udevd.service. Aug 13 00:01:24.650046 kernel: kauditd_printk_skb: 20 callbacks suppressed Aug 13 00:01:24.650137 kernel: audit: type=1130 audit(1755043284.623:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:24.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:24.626754 systemd[1]: Starting systemd-networkd.service... Aug 13 00:01:24.672296 systemd[1]: Found device dev-ttyAMA0.device. 
Aug 13 00:01:24.771931 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:01:24.856000 audit[1291]: AVC avc: denied { confidentiality } for pid=1291 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:01:24.884908 kernel: audit: type=1400 audit(1755043284.856:120): avc: denied { confidentiality } for pid=1291 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:01:24.884947 kernel: hv_vmbus: registering driver hv_balloon Aug 13 00:01:24.899559 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Aug 13 00:01:24.899651 kernel: hv_balloon: Memory hot add disabled on ARM64 Aug 13 00:01:24.856000 audit[1291]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaadb06c9b0 a1=aa2c a2=ffffa41e24b0 a3=aaaadafc8010 items=12 ppid=1278 pid=1291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:01:24.856000 audit: CWD cwd="/" Aug 13 00:01:24.936647 kernel: audit: type=1300 audit(1755043284.856:120): arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaadb06c9b0 a1=aa2c a2=ffffa41e24b0 a3=aaaadafc8010 items=12 ppid=1278 pid=1291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:01:24.936731 kernel: audit: type=1307 audit(1755043284.856:120): cwd="/" Aug 13 00:01:24.936750 kernel: audit: type=1302 audit(1755043284.856:120): item=0 name=(null) inode=5705 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.856000 
audit: PATH item=0 name=(null) inode=5705 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.856000 audit: PATH item=1 name=(null) inode=10713 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.973562 kernel: audit: type=1302 audit(1755043284.856:120): item=1 name=(null) inode=10713 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.856000 audit: PATH item=2 name=(null) inode=10713 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.993633 kernel: audit: type=1302 audit(1755043284.856:120): item=2 name=(null) inode=10713 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.993733 kernel: audit: type=1302 audit(1755043284.856:120): item=3 name=(null) inode=10714 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.856000 audit: PATH item=3 name=(null) inode=10714 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.856000 audit: PATH item=4 name=(null) inode=10713 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:25.016926 systemd[1]: Starting systemd-userdbd.service... 
Aug 13 00:01:25.039291 kernel: audit: type=1302 audit(1755043284.856:120): item=4 name=(null) inode=10713 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.856000 audit: PATH item=5 name=(null) inode=10715 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:25.058659 kernel: audit: type=1302 audit(1755043284.856:120): item=5 name=(null) inode=10715 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.856000 audit: PATH item=6 name=(null) inode=10713 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.856000 audit: PATH item=7 name=(null) inode=10716 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.856000 audit: PATH item=8 name=(null) inode=10713 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.856000 audit: PATH item=9 name=(null) inode=10717 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.856000 audit: PATH item=10 name=(null) inode=10713 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.856000 audit: PATH item=11 name=(null) inode=10718 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:24.856000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 00:01:25.080943 kernel: hv_vmbus: registering driver hyperv_fb Aug 13 00:01:25.094363 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Aug 13 00:01:25.094445 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Aug 13 00:01:25.100382 kernel: Console: switching to colour dummy device 80x25 Aug 13 00:01:25.103908 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 00:01:25.124915 kernel: hv_utils: Registering HyperV Utility Driver Aug 13 00:01:25.124991 kernel: hv_vmbus: registering driver hv_utils Aug 13 00:01:25.125007 kernel: hv_utils: Heartbeat IC version 3.0 Aug 13 00:01:24.684175 kernel: hv_utils: Shutdown IC version 3.2 Aug 13 00:01:25.613169 kernel: hv_utils: TimeSync IC version 4.0 Aug 13 00:01:25.613210 systemd-journald[1237]: Time jumped backwards, rotating. Aug 13 00:01:25.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:25.125441 systemd[1]: Started systemd-userdbd.service. Aug 13 00:01:26.292908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:01:26.322303 systemd[1]: Finished systemd-udev-settle.service. Aug 13 00:01:26.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:26.329927 systemd[1]: Starting lvm2-activation-early.service... Aug 13 00:01:26.981392 lvm[1356]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:01:27.037786 systemd[1]: Finished lvm2-activation-early.service. 
Aug 13 00:01:27.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:27.043372 systemd[1]: Reached target cryptsetup.target. Aug 13 00:01:27.049480 systemd[1]: Starting lvm2-activation.service... Aug 13 00:01:27.054454 lvm[1358]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:01:27.071117 systemd-networkd[1296]: lo: Link UP Aug 13 00:01:27.071738 systemd-networkd[1296]: lo: Gained carrier Aug 13 00:01:27.072275 systemd-networkd[1296]: Enumeration completed Aug 13 00:01:27.072490 systemd[1]: Started systemd-networkd.service. Aug 13 00:01:27.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:27.079170 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:01:27.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:27.085780 systemd[1]: Finished lvm2-activation.service. Aug 13 00:01:27.090742 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:01:27.096118 systemd-networkd[1296]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:01:27.096434 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:01:27.096461 systemd[1]: Reached target local-fs.target. Aug 13 00:01:27.101097 systemd[1]: Reached target machines.target. Aug 13 00:01:27.107107 systemd[1]: Starting ldconfig.service... 
Aug 13 00:01:27.139561 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:01:27.139639 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:01:27.140925 systemd[1]: Starting systemd-boot-update.service... Aug 13 00:01:27.146750 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:01:27.153573 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 00:01:27.159997 systemd[1]: Starting systemd-sysext.service... Aug 13 00:01:27.163687 kernel: mlx5_core 6d6b:00:02.0 enP28011s1: Link up Aug 13 00:01:27.213682 kernel: hv_netvsc 0022487e-70c0-0022-487e-70c00022487e eth0: Data path switched to VF: enP28011s1 Aug 13 00:01:27.215832 systemd-networkd[1296]: enP28011s1: Link UP Aug 13 00:01:27.215950 systemd-networkd[1296]: eth0: Link UP Aug 13 00:01:27.215953 systemd-networkd[1296]: eth0: Gained carrier Aug 13 00:01:27.221952 systemd-networkd[1296]: enP28011s1: Gained carrier Aug 13 00:01:27.229770 systemd-networkd[1296]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 13 00:01:27.267231 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1362 (bootctl) Aug 13 00:01:27.268555 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:01:27.477802 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:01:27.483141 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 00:01:27.483386 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 00:01:27.959896 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
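The DHCPv4 line above records the lease 10.200.20.35/24 with gateway 10.200.20.1, acquired from 168.63.129.16 (Azure's wire-server address). A quick consistency check of that lease with Python's stdlib `ipaddress` module:

```python
import ipaddress

lease = ipaddress.ip_interface("10.200.20.35/24")  # address/prefix from the log
gateway = ipaddress.ip_address("10.200.20.1")      # gateway from the log

# the host and its gateway must share the same /24 subnet
print(lease.network)             # → 10.200.20.0/24
print(gateway in lease.network)  # → True
```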
Aug 13 00:01:27.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:27.985682 kernel: loop0: detected capacity change from 0 to 203944 Aug 13 00:01:29.293817 systemd-networkd[1296]: eth0: Gained IPv6LL Aug 13 00:01:29.298755 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 00:01:29.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:29.310343 kernel: kauditd_printk_skb: 13 callbacks suppressed Aug 13 00:01:29.310440 kernel: audit: type=1130 audit(1755043289.304:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:29.652477 systemd-fsck[1374]: fsck.fat 4.2 (2021-01-31) Aug 13 00:01:29.652477 systemd-fsck[1374]: /dev/sda1: 236 files, 117307/258078 clusters Aug 13 00:01:29.654311 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 00:01:29.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:29.663651 systemd[1]: Mounting boot.mount... Aug 13 00:01:29.685166 kernel: audit: type=1130 audit(1755043289.661:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:30.358115 kernel: audit: type=1130 audit(1755043290.030:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:30.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:30.012208 systemd[1]: Mounted boot.mount. Aug 13 00:01:30.025925 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:01:30.956301 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:01:31.174697 kernel: loop1: detected capacity change from 0 to 203944 Aug 13 00:01:31.808004 (sd-sysext)[1388]: Using extensions 'kubernetes'. Aug 13 00:01:31.809514 (sd-sysext)[1388]: Merged extensions into '/usr'. Aug 13 00:01:31.826723 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:01:31.831036 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:01:31.832418 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:01:31.838069 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:01:31.844263 systemd[1]: Starting modprobe@loop.service... Aug 13 00:01:31.848474 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:01:31.848634 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:01:31.851511 systemd[1]: Mounted usr-share-oem.mount. Aug 13 00:01:31.856796 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:01:31.857127 systemd[1]: Finished modprobe@dm_mod.service. 
Aug 13 00:01:31.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:31.862555 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:01:31.862724 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:01:31.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:31.902538 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:01:31.902901 systemd[1]: Finished modprobe@loop.service. Aug 13 00:01:31.902998 kernel: audit: type=1130 audit(1755043291.861:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:31.903036 kernel: audit: type=1131 audit(1755043291.861:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:31.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:31.926055 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:01:31.926336 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Aug 13 00:01:31.926551 kernel: audit: type=1130 audit(1755043291.901:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:31.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:31.928190 systemd[1]: Finished systemd-sysext.service. Aug 13 00:01:31.946738 kernel: audit: type=1131 audit(1755043291.901:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:31.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:31.969214 kernel: audit: type=1130 audit(1755043291.924:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:31.969294 kernel: audit: type=1131 audit(1755043291.924:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:31.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:31.970063 systemd[1]: Starting ensure-sysext.service... 
Aug 13 00:01:31.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.005284 kernel: audit: type=1130 audit(1755043291.967:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.011099 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:01:32.019413 systemd[1]: Reloading. Aug 13 00:01:32.071781 /usr/lib/systemd/system-generators/torcx-generator[1422]: time="2025-08-13T00:01:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:01:32.075482 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:01:32.080199 /usr/lib/systemd/system-generators/torcx-generator[1422]: time="2025-08-13T00:01:32Z" level=info msg="torcx already run" Aug 13 00:01:32.161964 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:01:32.162830 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:01:32.162845 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:01:32.179830 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 13 00:01:32.220909 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:01:32.257945 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:01:32.259336 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:01:32.266740 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:01:32.272287 systemd[1]: Starting modprobe@loop.service... Aug 13 00:01:32.276298 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:01:32.276433 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:01:32.277305 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:01:32.277483 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:01:32.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.282905 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:01:32.283057 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:01:32.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:32.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.288635 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:01:32.288960 systemd[1]: Finished modprobe@loop.service. Aug 13 00:01:32.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.295219 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:01:32.296585 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:01:32.302023 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:01:32.307745 systemd[1]: Starting modprobe@loop.service... Aug 13 00:01:32.311915 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:01:32.312051 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:01:32.312861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:01:32.313036 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:01:32.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:32.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.318313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:01:32.318474 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:01:32.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.323788 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:01:32.323998 systemd[1]: Finished modprobe@loop.service. Aug 13 00:01:32.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.331244 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:01:32.332649 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:01:32.338335 systemd[1]: Starting modprobe@drm.service... Aug 13 00:01:32.344882 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:01:32.350793 systemd[1]: Starting modprobe@loop.service... 
Aug 13 00:01:32.355201 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:01:32.355335 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:01:32.356279 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:01:32.356446 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:01:32.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.362170 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:01:32.362324 systemd[1]: Finished modprobe@drm.service. Aug 13 00:01:32.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.367364 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:01:32.367513 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:01:32.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:32.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.372626 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:01:32.372948 systemd[1]: Finished modprobe@loop.service. Aug 13 00:01:32.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:32.378257 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:01:32.378346 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:01:32.379427 systemd[1]: Finished ensure-sysext.service. Aug 13 00:01:32.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:33.012123 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:01:33.012819 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:01:33.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.013349 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Aug 13 00:01:36.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.021080 systemd[1]: Starting audit-rules.service... Aug 13 00:01:36.025070 kernel: kauditd_printk_skb: 22 callbacks suppressed Aug 13 00:01:36.025154 kernel: audit: type=1130 audit(1755043296.018:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.050964 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:01:36.057646 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 00:01:36.064777 systemd[1]: Starting systemd-resolved.service... Aug 13 00:01:36.070939 systemd[1]: Starting systemd-timesyncd.service... Aug 13 00:01:36.077383 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:01:36.082621 systemd[1]: Finished clean-ca-certificates.service. Aug 13 00:01:36.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.092918 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:01:36.108785 kernel: audit: type=1130 audit(1755043296.087:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:36.146000 audit[1521]: SYSTEM_BOOT pid=1521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.168527 systemd[1]: Finished systemd-update-utmp.service. Aug 13 00:01:36.173789 kernel: audit: type=1127 audit(1755043296.146:161): pid=1521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.196907 kernel: audit: type=1130 audit(1755043296.172:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.266672 systemd[1]: Started systemd-timesyncd.service. Aug 13 00:01:36.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.271797 systemd[1]: Reached target time-set.target. Aug 13 00:01:36.294904 kernel: audit: type=1130 audit(1755043296.270:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.317333 systemd-resolved[1519]: Positive Trust Anchors: Aug 13 00:01:36.317349 systemd-resolved[1519]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:01:36.317374 systemd-resolved[1519]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:01:36.445063 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 00:01:36.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.473770 kernel: audit: type=1130 audit(1755043296.450:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.490845 systemd-resolved[1519]: Using system hostname 'ci-3510.3.8-a-dd293077f6'. Aug 13 00:01:36.492524 systemd[1]: Started systemd-resolved.service. Aug 13 00:01:36.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.497557 systemd[1]: Reached target network.target. Aug 13 00:01:36.519124 systemd[1]: Reached target network-online.target. Aug 13 00:01:36.521744 kernel: audit: type=1130 audit(1755043296.496:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 00:01:36.525967 systemd[1]: Reached target nss-lookup.target. Aug 13 00:01:36.561201 augenrules[1537]: No rules Aug 13 00:01:36.560000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:01:36.562464 systemd[1]: Finished audit-rules.service. Aug 13 00:01:36.560000 audit[1537]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc8edb580 a2=420 a3=0 items=0 ppid=1514 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:01:36.606473 kernel: audit: type=1305 audit(1755043296.560:166): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:01:36.606553 kernel: audit: type=1300 audit(1755043296.560:166): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc8edb580 a2=420 a3=0 items=0 ppid=1514 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:01:36.560000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 00:01:36.619680 kernel: audit: type=1327 audit(1755043296.560:166): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 00:01:36.668447 systemd-timesyncd[1520]: Contacted time server 85.209.17.10:123 (0.flatcar.pool.ntp.org). Aug 13 00:01:36.668883 systemd-timesyncd[1520]: Initial clock synchronization to Wed 2025-08-13 00:01:36.671046 UTC. Aug 13 00:01:42.264312 ldconfig[1361]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:01:42.272968 systemd[1]: Finished ldconfig.service. 
Aug 13 00:01:42.279751 systemd[1]: Starting systemd-update-done.service... Aug 13 00:01:42.325333 systemd[1]: Finished systemd-update-done.service. Aug 13 00:01:42.330907 systemd[1]: Reached target sysinit.target. Aug 13 00:01:42.335879 systemd[1]: Started motdgen.path. Aug 13 00:01:42.339804 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 00:01:42.346434 systemd[1]: Started logrotate.timer. Aug 13 00:01:42.350992 systemd[1]: Started mdadm.timer. Aug 13 00:01:42.355043 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 00:01:42.359964 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:01:42.359997 systemd[1]: Reached target paths.target. Aug 13 00:01:42.364334 systemd[1]: Reached target timers.target. Aug 13 00:01:42.369017 systemd[1]: Listening on dbus.socket. Aug 13 00:01:42.374578 systemd[1]: Starting docker.socket... Aug 13 00:01:42.404584 systemd[1]: Listening on sshd.socket. Aug 13 00:01:42.409015 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:01:42.409476 systemd[1]: Listening on docker.socket. Aug 13 00:01:42.413967 systemd[1]: Reached target sockets.target. Aug 13 00:01:42.418462 systemd[1]: Reached target basic.target. Aug 13 00:01:42.423085 systemd[1]: System is tainted: cgroupsv1 Aug 13 00:01:42.423141 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:01:42.423162 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:01:42.424456 systemd[1]: Starting containerd.service... Aug 13 00:01:42.429963 systemd[1]: Starting dbus.service... Aug 13 00:01:42.434898 systemd[1]: Starting enable-oem-cloudinit.service... 
Aug 13 00:01:42.440902 systemd[1]: Starting extend-filesystems.service... Aug 13 00:01:42.445608 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 00:01:42.459268 systemd[1]: Starting kubelet.service... Aug 13 00:01:42.464564 systemd[1]: Starting motdgen.service... Aug 13 00:01:42.469975 systemd[1]: Started nvidia.service. Aug 13 00:01:42.489227 systemd[1]: Starting prepare-helm.service... Aug 13 00:01:42.495293 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 00:01:42.501413 systemd[1]: Starting sshd-keygen.service... Aug 13 00:01:42.509576 systemd[1]: Starting systemd-logind.service... Aug 13 00:01:42.514684 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:01:42.514764 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:01:42.516147 systemd[1]: Starting update-engine.service... Aug 13 00:01:42.523378 jq[1552]: false Aug 13 00:01:42.523713 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 00:01:42.532109 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:01:42.532850 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 00:01:42.541385 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:01:42.541620 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Aug 13 00:01:42.544587 jq[1570]: true Aug 13 00:01:42.546340 extend-filesystems[1553]: Found loop1 Aug 13 00:01:42.546340 extend-filesystems[1553]: Found sda Aug 13 00:01:42.546340 extend-filesystems[1553]: Found sda1 Aug 13 00:01:42.546340 extend-filesystems[1553]: Found sda2 Aug 13 00:01:42.546340 extend-filesystems[1553]: Found sda3 Aug 13 00:01:42.546340 extend-filesystems[1553]: Found usr Aug 13 00:01:42.546340 extend-filesystems[1553]: Found sda4 Aug 13 00:01:42.546340 extend-filesystems[1553]: Found sda6 Aug 13 00:01:42.546340 extend-filesystems[1553]: Found sda7 Aug 13 00:01:42.546340 extend-filesystems[1553]: Found sda9 Aug 13 00:01:42.546340 extend-filesystems[1553]: Checking size of /dev/sda9 Aug 13 00:01:42.590701 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:01:42.649555 jq[1578]: true Aug 13 00:01:42.590985 systemd[1]: Finished motdgen.service. Aug 13 00:01:42.679372 env[1583]: time="2025-08-13T00:01:42.676115072Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 00:01:42.697562 extend-filesystems[1553]: Old size kept for /dev/sda9 Aug 13 00:01:42.723455 extend-filesystems[1553]: Found sr0 Aug 13 00:01:42.703437 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:01:42.703731 systemd[1]: Finished extend-filesystems.service. Aug 13 00:01:42.705045 systemd-logind[1566]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Aug 13 00:01:42.706135 systemd-logind[1566]: New seat seat0. Aug 13 00:01:42.741685 tar[1573]: linux-arm64/helm Aug 13 00:01:42.764500 bash[1613]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:01:42.765264 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 00:01:42.775693 env[1583]: time="2025-08-13T00:01:42.775632884Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Aug 13 00:01:42.775974 env[1583]: time="2025-08-13T00:01:42.775953204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:01:42.778167 env[1583]: time="2025-08-13T00:01:42.778124591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:01:42.778167 env[1583]: time="2025-08-13T00:01:42.778163156Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:01:42.778482 env[1583]: time="2025-08-13T00:01:42.778444551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:01:42.778482 env[1583]: time="2025-08-13T00:01:42.778470514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:01:42.778562 env[1583]: time="2025-08-13T00:01:42.778484916Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 00:01:42.778562 env[1583]: time="2025-08-13T00:01:42.778494677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:01:42.778600 env[1583]: time="2025-08-13T00:01:42.778568486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:01:42.783477 env[1583]: time="2025-08-13T00:01:42.783434205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 00:01:42.784972 env[1583]: time="2025-08-13T00:01:42.784924868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:01:42.784972 env[1583]: time="2025-08-13T00:01:42.784966514Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:01:42.785080 env[1583]: time="2025-08-13T00:01:42.785051524Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 00:01:42.785080 env[1583]: time="2025-08-13T00:01:42.785065286Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:01:42.807221 env[1583]: time="2025-08-13T00:01:42.807168687Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:01:42.807221 env[1583]: time="2025-08-13T00:01:42.807220453Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:01:42.807375 env[1583]: time="2025-08-13T00:01:42.807234215Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:01:42.807375 env[1583]: time="2025-08-13T00:01:42.807270380Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:01:42.807375 env[1583]: time="2025-08-13T00:01:42.807289222Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:01:42.807375 env[1583]: time="2025-08-13T00:01:42.807303704Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Aug 13 00:01:42.807375 env[1583]: time="2025-08-13T00:01:42.807316225Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:01:42.807765 env[1583]: time="2025-08-13T00:01:42.807740837Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:01:42.807824 env[1583]: time="2025-08-13T00:01:42.807768081Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 00:01:42.807824 env[1583]: time="2025-08-13T00:01:42.807785483Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:01:42.807824 env[1583]: time="2025-08-13T00:01:42.807798445Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:01:42.807824 env[1583]: time="2025-08-13T00:01:42.807811526Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:01:42.807975 env[1583]: time="2025-08-13T00:01:42.807952103Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:01:42.808075 env[1583]: time="2025-08-13T00:01:42.808052356Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:01:42.808426 env[1583]: time="2025-08-13T00:01:42.808397878Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:01:42.808473 env[1583]: time="2025-08-13T00:01:42.808431322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:01:42.808473 env[1583]: time="2025-08-13T00:01:42.808446364Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Aug 13 00:01:42.808519 env[1583]: time="2025-08-13T00:01:42.808492250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:01:42.808519 env[1583]: time="2025-08-13T00:01:42.808505532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:01:42.808560 env[1583]: time="2025-08-13T00:01:42.808517493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:01:42.808560 env[1583]: time="2025-08-13T00:01:42.808530335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:01:42.808560 env[1583]: time="2025-08-13T00:01:42.808544176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:01:42.808560 env[1583]: time="2025-08-13T00:01:42.808556858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:01:42.808636 env[1583]: time="2025-08-13T00:01:42.808570620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:01:42.808636 env[1583]: time="2025-08-13T00:01:42.808583261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:01:42.808636 env[1583]: time="2025-08-13T00:01:42.808596783Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:01:42.808787 env[1583]: time="2025-08-13T00:01:42.808752242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:01:42.808822 env[1583]: time="2025-08-13T00:01:42.808787486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Aug 13 00:01:42.808822 env[1583]: time="2025-08-13T00:01:42.808801928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:01:42.808822 env[1583]: time="2025-08-13T00:01:42.808813850Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:01:42.808886 env[1583]: time="2025-08-13T00:01:42.808828411Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 00:01:42.808886 env[1583]: time="2025-08-13T00:01:42.808840373Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:01:42.808886 env[1583]: time="2025-08-13T00:01:42.808859535Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 00:01:42.808948 env[1583]: time="2025-08-13T00:01:42.808895180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:01:42.809153 env[1583]: time="2025-08-13T00:01:42.809096604Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:01:42.824486 env[1583]: time="2025-08-13T00:01:42.809157652Z" level=info msg="Connect containerd service" Aug 13 00:01:42.824486 env[1583]: time="2025-08-13T00:01:42.809194536Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:01:42.824486 env[1583]: time="2025-08-13T00:01:42.812471340Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:01:42.824486 env[1583]: time="2025-08-13T00:01:42.812796620Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:01:42.824486 env[1583]: time="2025-08-13T00:01:42.812838545Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:01:42.824486 env[1583]: time="2025-08-13T00:01:42.814737499Z" level=info msg="Start subscribing containerd event" Aug 13 00:01:42.824486 env[1583]: time="2025-08-13T00:01:42.814860914Z" level=info msg="Start recovering state" Aug 13 00:01:42.824486 env[1583]: time="2025-08-13T00:01:42.814945364Z" level=info msg="Start event monitor" Aug 13 00:01:42.824486 env[1583]: time="2025-08-13T00:01:42.814962927Z" level=info msg="Start snapshots syncer" Aug 13 00:01:42.824486 env[1583]: time="2025-08-13T00:01:42.814973008Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:01:42.824486 env[1583]: time="2025-08-13T00:01:42.814990050Z" level=info msg="Start streaming server" Aug 13 00:01:42.812989 systemd[1]: Started containerd.service. 
Aug 13 00:01:42.832088 env[1583]: time="2025-08-13T00:01:42.831943577Z" level=info msg="containerd successfully booted in 0.183460s" Aug 13 00:01:42.872388 systemd[1]: nvidia.service: Deactivated successfully. Aug 13 00:01:43.111376 dbus-daemon[1551]: [system] SELinux support is enabled Aug 13 00:01:43.111597 systemd[1]: Started dbus.service. Aug 13 00:01:43.117507 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:01:43.117529 systemd[1]: Reached target system-config.target. Aug 13 00:01:43.126759 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:01:43.126788 systemd[1]: Reached target user-config.target. Aug 13 00:01:43.133941 systemd[1]: Started systemd-logind.service. Aug 13 00:01:43.319508 tar[1573]: linux-arm64/LICENSE Aug 13 00:01:43.319792 tar[1573]: linux-arm64/README.md Aug 13 00:01:43.326575 systemd[1]: Finished prepare-helm.service. Aug 13 00:01:43.338914 update_engine[1568]: I0813 00:01:43.326077 1568 main.cc:92] Flatcar Update Engine starting Aug 13 00:01:43.398200 systemd[1]: Started update-engine.service. Aug 13 00:01:43.402863 update_engine[1568]: I0813 00:01:43.398234 1568 update_check_scheduler.cc:74] Next update check in 2m51s Aug 13 00:01:43.406730 systemd[1]: Started locksmithd.service. Aug 13 00:01:43.656802 systemd[1]: Started kubelet.service. 
Aug 13 00:01:44.139513 kubelet[1674]: E0813 00:01:44.139408 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:01:44.141319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:01:44.141457 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:01:44.211852 sshd_keygen[1581]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:01:44.230595 systemd[1]: Finished sshd-keygen.service. Aug 13 00:01:44.237067 systemd[1]: Starting issuegen.service... Aug 13 00:01:44.242393 systemd[1]: Started waagent.service. Aug 13 00:01:44.247334 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:01:44.247594 systemd[1]: Finished issuegen.service. Aug 13 00:01:44.253999 systemd[1]: Starting systemd-user-sessions.service... Aug 13 00:01:44.301382 systemd[1]: Finished systemd-user-sessions.service. Aug 13 00:01:44.308830 systemd[1]: Started getty@tty1.service. Aug 13 00:01:44.320288 systemd[1]: Started serial-getty@ttyAMA0.service. Aug 13 00:01:44.325905 systemd[1]: Reached target getty.target. Aug 13 00:01:44.330603 systemd[1]: Reached target multi-user.target. Aug 13 00:01:44.337856 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 00:01:44.351274 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 00:01:44.351652 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 00:01:44.357421 systemd[1]: Startup finished in 16.260s (kernel) + 38.641s (userspace) = 54.902s. 
Aug 13 00:01:44.636137 locksmithd[1669]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:01:45.206112 login[1702]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Aug 13 00:01:45.229866 login[1701]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:01:45.362540 systemd[1]: Created slice user-500.slice. Aug 13 00:01:45.363778 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 00:01:45.366738 systemd-logind[1566]: New session 2 of user core. Aug 13 00:01:45.413707 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 00:01:45.415373 systemd[1]: Starting user@500.service... Aug 13 00:01:45.476182 (systemd)[1709]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:01:45.859325 systemd[1709]: Queued start job for default target default.target. Aug 13 00:01:45.860390 systemd[1709]: Reached target paths.target. Aug 13 00:01:45.860520 systemd[1709]: Reached target sockets.target. Aug 13 00:01:45.860604 systemd[1709]: Reached target timers.target. Aug 13 00:01:45.860725 systemd[1709]: Reached target basic.target. Aug 13 00:01:45.860857 systemd[1709]: Reached target default.target. Aug 13 00:01:45.860959 systemd[1709]: Startup finished in 377ms. Aug 13 00:01:45.860960 systemd[1]: Started user@500.service. Aug 13 00:01:45.861997 systemd[1]: Started session-2.scope. Aug 13 00:01:46.207587 login[1702]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:01:46.211435 systemd-logind[1566]: New session 1 of user core. Aug 13 00:01:46.212278 systemd[1]: Started session-1.scope. 
Aug 13 00:01:52.434238 waagent[1696]: 2025-08-13T00:01:52.434127Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Aug 13 00:01:52.453501 waagent[1696]: 2025-08-13T00:01:52.453400Z INFO Daemon Daemon OS: flatcar 3510.3.8 Aug 13 00:01:52.458762 waagent[1696]: 2025-08-13T00:01:52.458671Z INFO Daemon Daemon Python: 3.9.16 Aug 13 00:01:52.463788 waagent[1696]: 2025-08-13T00:01:52.463700Z INFO Daemon Daemon Run daemon Aug 13 00:01:52.468683 waagent[1696]: 2025-08-13T00:01:52.468579Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Aug 13 00:01:52.502774 waagent[1696]: 2025-08-13T00:01:52.502597Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Aug 13 00:01:52.519593 waagent[1696]: 2025-08-13T00:01:52.519420Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 00:01:52.530467 waagent[1696]: 2025-08-13T00:01:52.530372Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 00:01:52.536110 waagent[1696]: 2025-08-13T00:01:52.536011Z INFO Daemon Daemon Using waagent for provisioning Aug 13 00:01:52.542415 waagent[1696]: 2025-08-13T00:01:52.542331Z INFO Daemon Daemon Activate resource disk Aug 13 00:01:52.547812 waagent[1696]: 2025-08-13T00:01:52.547718Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 13 00:01:52.563270 waagent[1696]: 2025-08-13T00:01:52.563177Z INFO Daemon Daemon Found device: None Aug 13 00:01:52.568205 waagent[1696]: 2025-08-13T00:01:52.568117Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Aug 13 00:01:52.577573 waagent[1696]: 2025-08-13T00:01:52.577481Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, 
duration=0 Aug 13 00:01:52.590390 waagent[1696]: 2025-08-13T00:01:52.590309Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:01:52.597709 waagent[1696]: 2025-08-13T00:01:52.597586Z INFO Daemon Daemon Running default provisioning handler Aug 13 00:01:52.615961 waagent[1696]: 2025-08-13T00:01:52.615797Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Aug 13 00:01:52.632890 waagent[1696]: 2025-08-13T00:01:52.632742Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 00:01:52.643499 waagent[1696]: 2025-08-13T00:01:52.643399Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 00:01:52.649246 waagent[1696]: 2025-08-13T00:01:52.649131Z INFO Daemon Daemon Copying ovf-env.xml Aug 13 00:01:52.777321 waagent[1696]: 2025-08-13T00:01:52.777113Z INFO Daemon Daemon Successfully mounted dvd Aug 13 00:01:52.864133 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Aug 13 00:01:52.911407 waagent[1696]: 2025-08-13T00:01:52.911263Z INFO Daemon Daemon Detect protocol endpoint Aug 13 00:01:52.917017 waagent[1696]: 2025-08-13T00:01:52.916917Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:01:52.923508 waagent[1696]: 2025-08-13T00:01:52.923412Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Aug 13 00:01:52.930877 waagent[1696]: 2025-08-13T00:01:52.930785Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 13 00:01:52.936899 waagent[1696]: 2025-08-13T00:01:52.936814Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 13 00:01:52.942719 waagent[1696]: 2025-08-13T00:01:52.942616Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 13 00:01:53.090870 waagent[1696]: 2025-08-13T00:01:53.090733Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 13 00:01:53.099066 waagent[1696]: 2025-08-13T00:01:53.099017Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 13 00:01:53.105880 waagent[1696]: 2025-08-13T00:01:53.105754Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 13 00:01:53.875491 waagent[1696]: 2025-08-13T00:01:53.875280Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 13 00:01:53.925561 waagent[1696]: 2025-08-13T00:01:53.925461Z INFO Daemon Daemon Forcing an update of the goal state.. Aug 13 00:01:53.931875 waagent[1696]: 2025-08-13T00:01:53.931780Z INFO Daemon Daemon Fetching goal state [incarnation 1] Aug 13 00:01:54.031523 waagent[1696]: 2025-08-13T00:01:54.031369Z INFO Daemon Daemon Found private key matching thumbprint 542A17A50D5069BEEAC63305D1CE103574612148 Aug 13 00:01:54.042730 waagent[1696]: 2025-08-13T00:01:54.042531Z INFO Daemon Daemon Certificate with thumbprint 7F62010F92251816187A96723BAF08CF598A0E38 has no matching private key. 
Aug 13 00:01:54.053397 waagent[1696]: 2025-08-13T00:01:54.053296Z INFO Daemon Daemon Fetch goal state completed Aug 13 00:01:54.094919 waagent[1696]: 2025-08-13T00:01:54.094848Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: d0b4369e-4f6d-48dc-a472-1f8b5afa906b New eTag: 12644639573462578058] Aug 13 00:01:54.106672 waagent[1696]: 2025-08-13T00:01:54.106571Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Aug 13 00:01:54.124982 waagent[1696]: 2025-08-13T00:01:54.124891Z INFO Daemon Daemon Starting provisioning Aug 13 00:01:54.130399 waagent[1696]: 2025-08-13T00:01:54.130280Z INFO Daemon Daemon Handle ovf-env.xml. Aug 13 00:01:54.135298 waagent[1696]: 2025-08-13T00:01:54.135224Z INFO Daemon Daemon Set hostname [ci-3510.3.8-a-dd293077f6] Aug 13 00:01:54.160336 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:01:54.160522 systemd[1]: Stopped kubelet.service. Aug 13 00:01:54.162085 systemd[1]: Starting kubelet.service... Aug 13 00:01:54.194193 waagent[1696]: 2025-08-13T00:01:54.194050Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-a-dd293077f6] Aug 13 00:01:54.202379 waagent[1696]: 2025-08-13T00:01:54.202267Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 13 00:01:54.210522 waagent[1696]: 2025-08-13T00:01:54.210429Z INFO Daemon Daemon Primary interface is [eth0] Aug 13 00:01:54.230015 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Aug 13 00:01:54.230240 systemd[1]: Stopped systemd-networkd-wait-online.service. Aug 13 00:01:54.230308 systemd[1]: Stopping systemd-networkd-wait-online.service... Aug 13 00:01:54.230496 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:01:54.236733 systemd-networkd[1296]: eth0: DHCPv6 lease lost Aug 13 00:01:54.239028 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:01:54.239291 systemd[1]: Stopped systemd-networkd.service. 
Aug 13 00:01:54.241309 systemd[1]: Starting systemd-networkd.service... Aug 13 00:01:54.272907 systemd[1]: Started kubelet.service. Aug 13 00:01:54.284492 systemd-networkd[1759]: enP28011s1: Link UP Aug 13 00:01:54.284505 systemd-networkd[1759]: enP28011s1: Gained carrier Aug 13 00:01:54.285547 systemd-networkd[1759]: eth0: Link UP Aug 13 00:01:54.285550 systemd-networkd[1759]: eth0: Gained carrier Aug 13 00:01:54.285927 systemd-networkd[1759]: lo: Link UP Aug 13 00:01:54.285929 systemd-networkd[1759]: lo: Gained carrier Aug 13 00:01:54.286174 systemd-networkd[1759]: eth0: Gained IPv6LL Aug 13 00:01:54.286386 systemd-networkd[1759]: Enumeration completed Aug 13 00:01:54.286500 systemd[1]: Started systemd-networkd.service. Aug 13 00:01:54.288432 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:01:54.289894 systemd-networkd[1759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:01:54.292881 waagent[1696]: 2025-08-13T00:01:54.292094Z INFO Daemon Daemon Create user account if not exists Aug 13 00:01:54.299236 waagent[1696]: 2025-08-13T00:01:54.298892Z INFO Daemon Daemon User core already exists, skip useradd Aug 13 00:01:54.305427 waagent[1696]: 2025-08-13T00:01:54.305325Z INFO Daemon Daemon Configure sudoer Aug 13 00:01:54.320757 systemd-networkd[1759]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 13 00:01:54.330768 systemd[1]: Finished systemd-networkd-wait-online.service. 
Aug 13 00:01:54.399269 kubelet[1765]: E0813 00:01:54.399140 1765 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:01:54.404448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:01:54.404824 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:01:54.672301 waagent[1696]: 2025-08-13T00:01:54.672192Z INFO Daemon Daemon Configure sshd Aug 13 00:01:54.677153 waagent[1696]: 2025-08-13T00:01:54.677067Z INFO Daemon Daemon Deploy ssh public key. Aug 13 00:01:55.873992 waagent[1696]: 2025-08-13T00:01:55.873858Z INFO Daemon Daemon Provisioning complete Aug 13 00:01:55.895621 waagent[1696]: 2025-08-13T00:01:55.895538Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 13 00:01:55.902318 waagent[1696]: 2025-08-13T00:01:55.902220Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Aug 13 00:01:55.914379 waagent[1696]: 2025-08-13T00:01:55.914265Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Aug 13 00:01:56.236346 waagent[1781]: 2025-08-13T00:01:56.236239Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Aug 13 00:01:56.237544 waagent[1781]: 2025-08-13T00:01:56.237477Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:01:56.237849 waagent[1781]: 2025-08-13T00:01:56.237794Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:01:56.253299 waagent[1781]: 2025-08-13T00:01:56.253194Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Aug 13 00:01:56.253695 waagent[1781]: 2025-08-13T00:01:56.253589Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Aug 13 00:01:56.329948 waagent[1781]: 2025-08-13T00:01:56.329810Z INFO ExtHandler ExtHandler Found private key matching thumbprint 542A17A50D5069BEEAC63305D1CE103574612148 Aug 13 00:01:56.330321 waagent[1781]: 2025-08-13T00:01:56.330266Z INFO ExtHandler ExtHandler Certificate with thumbprint 7F62010F92251816187A96723BAF08CF598A0E38 has no matching private key. Aug 13 00:01:56.330722 waagent[1781]: 2025-08-13T00:01:56.330605Z INFO ExtHandler ExtHandler Fetch goal state completed Aug 13 00:01:56.345392 waagent[1781]: 2025-08-13T00:01:56.345331Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 33989493-7d08-461b-9140-7855c405f117 New eTag: 12644639573462578058] Aug 13 00:01:56.346183 waagent[1781]: 2025-08-13T00:01:56.346117Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Aug 13 00:01:56.452809 waagent[1781]: 2025-08-13T00:01:56.452604Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 13 00:01:56.496968 waagent[1781]: 2025-08-13T00:01:56.496786Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1781 Aug 13 00:01:56.501151 waagent[1781]: 2025-08-13T00:01:56.501050Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Aug 13 00:01:56.502641 waagent[1781]: 2025-08-13T00:01:56.502539Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Aug 13 00:01:56.637184 waagent[1781]: 2025-08-13T00:01:56.637105Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 13 00:01:56.637651 waagent[1781]: 2025-08-13T00:01:56.637586Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Aug 13 00:01:56.646783 waagent[1781]: 2025-08-13T00:01:56.646699Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Aug 13 00:01:56.647456 waagent[1781]: 2025-08-13T00:01:56.647379Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Aug 13 00:01:56.648836 waagent[1781]: 2025-08-13T00:01:56.648756Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Aug 13 00:01:56.650571 waagent[1781]: 2025-08-13T00:01:56.650470Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 13 00:01:56.651401 waagent[1781]: 2025-08-13T00:01:56.651329Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:01:56.651716 waagent[1781]: 2025-08-13T00:01:56.651628Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:01:56.652442 waagent[1781]: 2025-08-13T00:01:56.652375Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Aug 13 00:01:56.652943 waagent[1781]: 2025-08-13T00:01:56.652873Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 13 00:01:56.652943 waagent[1781]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 13 00:01:56.652943 waagent[1781]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Aug 13 00:01:56.652943 waagent[1781]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 13 00:01:56.652943 waagent[1781]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:01:56.652943 waagent[1781]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:01:56.652943 waagent[1781]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:01:56.656092 waagent[1781]: 2025-08-13T00:01:56.655918Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:01:56.656413 waagent[1781]: 2025-08-13T00:01:56.656345Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:01:56.657103 waagent[1781]: 2025-08-13T00:01:56.657027Z INFO EnvHandler ExtHandler Configure routes Aug 13 00:01:56.657266 waagent[1781]: 2025-08-13T00:01:56.657216Z INFO EnvHandler ExtHandler Gateway:None Aug 13 00:01:56.657384 waagent[1781]: 2025-08-13T00:01:56.657340Z INFO EnvHandler ExtHandler Routes:None Aug 13 00:01:56.658046 waagent[1781]: 2025-08-13T00:01:56.657949Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Aug 13 00:01:56.659166 waagent[1781]: 2025-08-13T00:01:56.659097Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 13 00:01:56.659364 waagent[1781]: 2025-08-13T00:01:56.659278Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Aug 13 00:01:56.660161 waagent[1781]: 2025-08-13T00:01:56.660078Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 13 00:01:56.660355 waagent[1781]: 2025-08-13T00:01:56.660276Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Aug 13 00:01:56.660640 waagent[1781]: 2025-08-13T00:01:56.660574Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 13 00:01:56.674257 waagent[1781]: 2025-08-13T00:01:56.674171Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Aug 13 00:01:56.675157 waagent[1781]: 2025-08-13T00:01:56.675100Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Aug 13 00:01:56.676283 waagent[1781]: 2025-08-13T00:01:56.676215Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Aug 13 00:01:56.712135 waagent[1781]: 2025-08-13T00:01:56.711963Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1759' Aug 13 00:01:56.743481 waagent[1781]: 2025-08-13T00:01:56.743408Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
Aug 13 00:01:56.804413 waagent[1781]: 2025-08-13T00:01:56.804183Z INFO MonitorHandler ExtHandler Network interfaces:
Aug 13 00:01:56.804413 waagent[1781]: Executing ['ip', '-a', '-o', 'link']:
Aug 13 00:01:56.804413 waagent[1781]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Aug 13 00:01:56.804413 waagent[1781]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7e:70:c0 brd ff:ff:ff:ff:ff:ff
Aug 13 00:01:56.804413 waagent[1781]: 3: enP28011s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7e:70:c0 brd ff:ff:ff:ff:ff:ff\ altname enP28011p0s2
Aug 13 00:01:56.804413 waagent[1781]: Executing ['ip', '-4', '-a', '-o', 'address']:
Aug 13 00:01:56.804413 waagent[1781]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Aug 13 00:01:56.804413 waagent[1781]: 2: eth0 inet 10.200.20.35/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Aug 13 00:01:56.804413 waagent[1781]: Executing ['ip', '-6', '-a', '-o', 'address']:
Aug 13 00:01:56.804413 waagent[1781]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Aug 13 00:01:56.804413 waagent[1781]: 2: eth0 inet6 fe80::222:48ff:fe7e:70c0/64 scope link \ valid_lft forever preferred_lft forever
Aug 13 00:01:57.076585 waagent[1781]: 2025-08-13T00:01:57.076470Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting
Aug 13 00:01:57.920396 waagent[1696]: 2025-08-13T00:01:57.920239Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Aug 13 00:01:57.926515 waagent[1696]: 2025-08-13T00:01:57.926415Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent
Aug 13 00:01:59.295301 waagent[1813]: 2025-08-13T00:01:59.295191Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1)
Aug 13 00:01:59.296425 waagent[1813]: 2025-08-13T00:01:59.296360Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8
Aug 13 00:01:59.296735 waagent[1813]: 2025-08-13T00:01:59.296654Z INFO ExtHandler ExtHandler Python: 3.9.16
Aug 13 00:01:59.296980 waagent[1813]: 2025-08-13T00:01:59.296933Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Aug 13 00:01:59.311532 waagent[1813]: 2025-08-13T00:01:59.311408Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1;
Aug 13 00:01:59.312230 waagent[1813]: 2025-08-13T00:01:59.312169Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Aug 13 00:01:59.312529 waagent[1813]: 2025-08-13T00:01:59.312479Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Aug 13 00:01:59.312878 waagent[1813]: 2025-08-13T00:01:59.312825Z INFO ExtHandler ExtHandler Initializing the goal state...
Aug 13 00:01:59.327679 waagent[1813]: 2025-08-13T00:01:59.327581Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Aug 13 00:01:59.341191 waagent[1813]: 2025-08-13T00:01:59.341126Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Aug 13 00:01:59.342334 waagent[1813]: 2025-08-13T00:01:59.342273Z INFO ExtHandler
Aug 13 00:01:59.342507 waagent[1813]: 2025-08-13T00:01:59.342458Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d80e40f1-1676-4b31-ad89-981ebcfefe01 eTag: 12644639573462578058 source: Fabric]
Aug 13 00:01:59.343318 waagent[1813]: 2025-08-13T00:01:59.343260Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Aug 13 00:01:59.344606 waagent[1813]: 2025-08-13T00:01:59.344544Z INFO ExtHandler
Aug 13 00:01:59.344792 waagent[1813]: 2025-08-13T00:01:59.344743Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Aug 13 00:01:59.352008 waagent[1813]: 2025-08-13T00:01:59.351957Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Aug 13 00:01:59.352564 waagent[1813]: 2025-08-13T00:01:59.352515Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Aug 13 00:01:59.376412 waagent[1813]: 2025-08-13T00:01:59.376349Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Aug 13 00:01:59.460303 waagent[1813]: 2025-08-13T00:01:59.460166Z INFO ExtHandler Downloaded certificate {'thumbprint': '7F62010F92251816187A96723BAF08CF598A0E38', 'hasPrivateKey': False}
Aug 13 00:01:59.461509 waagent[1813]: 2025-08-13T00:01:59.461443Z INFO ExtHandler Downloaded certificate {'thumbprint': '542A17A50D5069BEEAC63305D1CE103574612148', 'hasPrivateKey': True}
Aug 13 00:01:59.462755 waagent[1813]: 2025-08-13T00:01:59.462633Z INFO ExtHandler Fetch goal state from WireServer completed
Aug 13 00:01:59.463731 waagent[1813]: 2025-08-13T00:01:59.463642Z INFO ExtHandler ExtHandler Goal state initialization completed.
Aug 13 00:01:59.485937 waagent[1813]: 2025-08-13T00:01:59.485771Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
Aug 13 00:01:59.494871 waagent[1813]: 2025-08-13T00:01:59.494746Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
Aug 13 00:01:59.498763 waagent[1813]: 2025-08-13T00:01:59.498615Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT']
Aug 13 00:01:59.499022 waagent[1813]: 2025-08-13T00:01:59.498969Z INFO ExtHandler ExtHandler Checking state of the firewall
Aug 13 00:01:59.673723 waagent[1813]: 2025-08-13T00:01:59.673566Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric:
Aug 13 00:01:59.673723 waagent[1813]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:01:59.673723 waagent[1813]: pkts bytes target prot opt in out source destination
Aug 13 00:01:59.673723 waagent[1813]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:01:59.673723 waagent[1813]: pkts bytes target prot opt in out source destination
Aug 13 00:01:59.673723 waagent[1813]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:01:59.673723 waagent[1813]: pkts bytes target prot opt in out source destination
Aug 13 00:01:59.673723 waagent[1813]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Aug 13 00:01:59.673723 waagent[1813]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Aug 13 00:01:59.673723 waagent[1813]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Aug 13 00:01:59.675351 waagent[1813]: 2025-08-13T00:01:59.675287Z INFO ExtHandler ExtHandler Setting up persistent firewall rules
Aug 13 00:01:59.678733 waagent[1813]: 2025-08-13T00:01:59.678584Z INFO ExtHandler ExtHandler The firewalld service is not present on the system
Aug 13 00:01:59.679193 waagent[1813]: 2025-08-13T00:01:59.679140Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Aug 13 00:01:59.679727 waagent[1813]: 2025-08-13T00:01:59.679645Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Aug 13 00:01:59.688353 waagent[1813]: 2025-08-13T00:01:59.688291Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Aug 13 00:01:59.689238 waagent[1813]: 2025-08-13T00:01:59.689178Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Aug 13 00:01:59.698264 waagent[1813]: 2025-08-13T00:01:59.698177Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1813
Aug 13 00:01:59.702125 waagent[1813]: 2025-08-13T00:01:59.702034Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk']
Aug 13 00:01:59.703239 waagent[1813]: 2025-08-13T00:01:59.703175Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled
Aug 13 00:01:59.704359 waagent[1813]: 2025-08-13T00:01:59.704299Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Aug 13 00:01:59.707498 waagent[1813]: 2025-08-13T00:01:59.707437Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem
Aug 13 00:01:59.708028 waagent[1813]: 2025-08-13T00:01:59.707970Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Aug 13 00:01:59.709629 waagent[1813]: 2025-08-13T00:01:59.709560Z INFO ExtHandler ExtHandler Starting env monitor service.
Aug 13 00:01:59.710064 waagent[1813]: 2025-08-13T00:01:59.709991Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Aug 13 00:01:59.710588 waagent[1813]: 2025-08-13T00:01:59.710521Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Aug 13 00:01:59.711272 waagent[1813]: 2025-08-13T00:01:59.711211Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Aug 13 00:01:59.711702 waagent[1813]: 2025-08-13T00:01:59.711618Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Aug 13 00:01:59.711702 waagent[1813]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Aug 13 00:01:59.711702 waagent[1813]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Aug 13 00:01:59.711702 waagent[1813]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Aug 13 00:01:59.711702 waagent[1813]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Aug 13 00:01:59.711702 waagent[1813]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Aug 13 00:01:59.711702 waagent[1813]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Aug 13 00:01:59.714418 waagent[1813]: 2025-08-13T00:01:59.714312Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Aug 13 00:01:59.715029 waagent[1813]: 2025-08-13T00:01:59.714964Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Aug 13 00:01:59.716517 waagent[1813]: 2025-08-13T00:01:59.716447Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Aug 13 00:01:59.718365 waagent[1813]: 2025-08-13T00:01:59.718207Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Aug 13 00:01:59.719748 waagent[1813]: 2025-08-13T00:01:59.719668Z INFO EnvHandler ExtHandler Configure routes
Aug 13 00:01:59.719992 waagent[1813]: 2025-08-13T00:01:59.719937Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Aug 13 00:01:59.721135 waagent[1813]: 2025-08-13T00:01:59.721042Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Aug 13 00:01:59.721507 waagent[1813]: 2025-08-13T00:01:59.721448Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Aug 13 00:01:59.721838 waagent[1813]: 2025-08-13T00:01:59.721775Z INFO EnvHandler ExtHandler Gateway:None
Aug 13 00:01:59.722204 waagent[1813]: 2025-08-13T00:01:59.722133Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Aug 13 00:01:59.726386 waagent[1813]: 2025-08-13T00:01:59.726311Z INFO EnvHandler ExtHandler Routes:None
Aug 13 00:01:59.736293 waagent[1813]: 2025-08-13T00:01:59.736183Z INFO MonitorHandler ExtHandler Network interfaces:
Aug 13 00:01:59.736293 waagent[1813]: Executing ['ip', '-a', '-o', 'link']:
Aug 13 00:01:59.736293 waagent[1813]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Aug 13 00:01:59.736293 waagent[1813]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7e:70:c0 brd ff:ff:ff:ff:ff:ff
Aug 13 00:01:59.736293 waagent[1813]: 3: enP28011s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7e:70:c0 brd ff:ff:ff:ff:ff:ff\ altname enP28011p0s2
Aug 13 00:01:59.736293 waagent[1813]: Executing ['ip', '-4', '-a', '-o', 'address']:
Aug 13 00:01:59.736293 waagent[1813]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Aug 13 00:01:59.736293 waagent[1813]: 2: eth0 inet 10.200.20.35/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Aug 13 00:01:59.736293 waagent[1813]: Executing ['ip', '-6', '-a', '-o', 'address']:
Aug 13 00:01:59.736293 waagent[1813]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Aug 13 00:01:59.736293 waagent[1813]: 2: eth0 inet6 fe80::222:48ff:fe7e:70c0/64 scope link \ valid_lft forever preferred_lft forever
Aug 13 00:01:59.747981 waagent[1813]: 2025-08-13T00:01:59.747889Z INFO ExtHandler ExtHandler Downloading agent manifest
Aug 13 00:01:59.763602 waagent[1813]: 2025-08-13T00:01:59.763507Z INFO ExtHandler ExtHandler
Aug 13 00:01:59.765459 waagent[1813]: 2025-08-13T00:01:59.765353Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ef50bed1-7495-40d6-a861-8ed840f5576b correlation a27ff9a4-c15f-4a0d-92d0-1356ab3e8061 created: 2025-08-13T00:00:05.356926Z]
Aug 13 00:01:59.768697 waagent[1813]: 2025-08-13T00:01:59.768596Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Aug 13 00:01:59.773095 waagent[1813]: 2025-08-13T00:01:59.773017Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 9 ms]
Aug 13 00:01:59.788639 waagent[1813]: 2025-08-13T00:01:59.788564Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
Aug 13 00:01:59.803761 waagent[1813]: 2025-08-13T00:01:59.803646Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Aug 13 00:01:59.807542 waagent[1813]: 2025-08-13T00:01:59.807472Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Aug 13 00:01:59.810794 waagent[1813]: 2025-08-13T00:01:59.810720Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 324BEA0B-042E-46EC-B9C2-3112078AF300;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
Aug 13 00:02:04.410396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 00:02:04.410560 systemd[1]: Stopped kubelet.service.
Aug 13 00:02:04.412113 systemd[1]: Starting kubelet.service...
Aug 13 00:02:04.509115 systemd[1]: Started kubelet.service.
Aug 13 00:02:04.645360 kubelet[1865]: E0813 00:02:04.645309 1865 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:02:04.647264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:02:04.647405 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:02:12.602762 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Aug 13 00:02:14.660426 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 13 00:02:14.660590 systemd[1]: Stopped kubelet.service.
Aug 13 00:02:14.662116 systemd[1]: Starting kubelet.service...
Aug 13 00:02:14.752967 systemd[1]: Started kubelet.service.
Aug 13 00:02:14.875110 kubelet[1880]: E0813 00:02:14.875070 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:02:14.876894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:02:14.877027 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:02:22.305947 systemd[1]: Created slice system-sshd.slice.
Aug 13 00:02:22.307163 systemd[1]: Started sshd@0-10.200.20.35:22-10.200.16.10:42912.service.
Aug 13 00:02:22.988908 sshd[1886]: Accepted publickey for core from 10.200.16.10 port 42912 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:02:23.003839 sshd[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:02:23.008828 systemd[1]: Started session-3.scope.
Aug 13 00:02:23.009192 systemd-logind[1566]: New session 3 of user core.
Aug 13 00:02:23.414093 systemd[1]: Started sshd@1-10.200.20.35:22-10.200.16.10:42928.service.
Aug 13 00:02:23.883075 sshd[1891]: Accepted publickey for core from 10.200.16.10 port 42928 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:02:23.885200 sshd[1891]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:02:23.889644 systemd-logind[1566]: New session 4 of user core.
Aug 13 00:02:23.890129 systemd[1]: Started session-4.scope.
Aug 13 00:02:24.220633 sshd[1891]: pam_unix(sshd:session): session closed for user core
Aug 13 00:02:24.223563 systemd[1]: sshd@1-10.200.20.35:22-10.200.16.10:42928.service: Deactivated successfully.
Aug 13 00:02:24.224517 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 00:02:24.224553 systemd-logind[1566]: Session 4 logged out. Waiting for processes to exit.
Aug 13 00:02:24.225717 systemd-logind[1566]: Removed session 4.
Aug 13 00:02:24.297395 systemd[1]: Started sshd@2-10.200.20.35:22-10.200.16.10:42942.service.
Aug 13 00:02:24.766714 sshd[1898]: Accepted publickey for core from 10.200.16.10 port 42942 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:02:24.768393 sshd[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:02:24.772316 systemd-logind[1566]: New session 5 of user core.
Aug 13 00:02:24.772759 systemd[1]: Started session-5.scope.
Aug 13 00:02:24.911688 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Aug 13 00:02:24.911869 systemd[1]: Stopped kubelet.service.
Aug 13 00:02:24.913347 systemd[1]: Starting kubelet.service...
Aug 13 00:02:25.004650 systemd[1]: Started kubelet.service.
Aug 13 00:02:25.115722 systemd[1]: sshd@2-10.200.20.35:22-10.200.16.10:42942.service: Deactivated successfully.
Aug 13 00:02:25.112782 sshd[1898]: pam_unix(sshd:session): session closed for user core
Aug 13 00:02:25.116726 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 00:02:25.116739 systemd-logind[1566]: Session 5 logged out. Waiting for processes to exit.
Aug 13 00:02:25.118235 systemd-logind[1566]: Removed session 5.
Aug 13 00:02:25.133328 kubelet[1913]: E0813 00:02:25.133289 1913 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:02:25.135128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:02:25.135273 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:02:25.189752 systemd[1]: Started sshd@3-10.200.20.35:22-10.200.16.10:42946.service.
Aug 13 00:02:25.659958 sshd[1923]: Accepted publickey for core from 10.200.16.10 port 42946 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:02:25.661295 sshd[1923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:02:25.666018 systemd[1]: Started session-6.scope.
Aug 13 00:02:25.666334 systemd-logind[1566]: New session 6 of user core.
Aug 13 00:02:26.012212 sshd[1923]: pam_unix(sshd:session): session closed for user core
Aug 13 00:02:26.014728 systemd[1]: sshd@3-10.200.20.35:22-10.200.16.10:42946.service: Deactivated successfully.
Aug 13 00:02:26.015428 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 00:02:26.016428 systemd-logind[1566]: Session 6 logged out. Waiting for processes to exit.
Aug 13 00:02:26.017509 systemd-logind[1566]: Removed session 6.
Aug 13 00:02:26.087077 systemd[1]: Started sshd@4-10.200.20.35:22-10.200.16.10:42958.service.
Aug 13 00:02:26.555510 sshd[1930]: Accepted publickey for core from 10.200.16.10 port 42958 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:02:26.557203 sshd[1930]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:02:26.561677 systemd[1]: Started session-7.scope.
Aug 13 00:02:26.562170 systemd-logind[1566]: New session 7 of user core.
Aug 13 00:02:27.166115 sudo[1934]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 00:02:27.166733 sudo[1934]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 13 00:02:27.212875 dbus-daemon[1551]: avc: received setenforce notice (enforcing=1)
Aug 13 00:02:27.214797 sudo[1934]: pam_unix(sudo:session): session closed for user root
Aug 13 00:02:27.299478 sshd[1930]: pam_unix(sshd:session): session closed for user core
Aug 13 00:02:27.302856 systemd-logind[1566]: Session 7 logged out. Waiting for processes to exit.
Aug 13 00:02:27.303049 systemd[1]: sshd@4-10.200.20.35:22-10.200.16.10:42958.service: Deactivated successfully.
Aug 13 00:02:27.303861 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 00:02:27.304568 systemd-logind[1566]: Removed session 7.
Aug 13 00:02:27.376216 systemd[1]: Started sshd@5-10.200.20.35:22-10.200.16.10:42964.service.
Aug 13 00:02:27.844907 sshd[1938]: Accepted publickey for core from 10.200.16.10 port 42964 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:02:27.846639 sshd[1938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:02:27.850520 systemd-logind[1566]: New session 8 of user core.
Aug 13 00:02:27.850995 systemd[1]: Started session-8.scope.
Aug 13 00:02:28.112056 sudo[1943]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 00:02:28.112844 sudo[1943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 13 00:02:28.115977 sudo[1943]: pam_unix(sudo:session): session closed for user root
Aug 13 00:02:28.121295 sudo[1942]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 13 00:02:28.121520 sudo[1942]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 13 00:02:28.130493 systemd[1]: Stopping audit-rules.service...
Aug 13 00:02:28.131000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Aug 13 00:02:28.131000 audit[1946]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffceb20c80 a2=420 a3=0 items=0 ppid=1 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:02:28.143212 auditctl[1946]: No rules
Aug 13 00:02:28.143721 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 00:02:28.143979 systemd[1]: Stopped audit-rules.service.
Aug 13 00:02:28.146039 systemd[1]: Starting audit-rules.service...
Aug 13 00:02:28.166332 kernel: audit: type=1305 audit(1755043348.131:167): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Aug 13 00:02:28.166437 kernel: audit: type=1300 audit(1755043348.131:167): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffceb20c80 a2=420 a3=0 items=0 ppid=1 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:02:28.131000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Aug 13 00:02:28.174959 kernel: audit: type=1327 audit(1755043348.131:167): proctitle=2F7362696E2F617564697463746C002D44
Aug 13 00:02:28.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:02:28.194501 kernel: audit: type=1131 audit(1755043348.142:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:02:28.199377 augenrules[1964]: No rules
Aug 13 00:02:28.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:02:28.200430 systemd[1]: Finished audit-rules.service.
Aug 13 00:02:28.218013 sudo[1942]: pam_unix(sudo:session): session closed for user root
Aug 13 00:02:28.217000 audit[1942]: USER_END pid=1942 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 13 00:02:28.238570 kernel: audit: type=1130 audit(1755043348.199:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:02:28.238683 kernel: audit: type=1106 audit(1755043348.217:170): pid=1942 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 13 00:02:28.217000 audit[1942]: CRED_DISP pid=1942 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 13 00:02:28.256777 kernel: audit: type=1104 audit(1755043348.217:171): pid=1942 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 13 00:02:28.288890 sshd[1938]: pam_unix(sshd:session): session closed for user core
Aug 13 00:02:28.289000 audit[1938]: USER_END pid=1938 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:02:28.294071 systemd[1]: sshd@5-10.200.20.35:22-10.200.16.10:42964.service: Deactivated successfully.
Aug 13 00:02:28.294882 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 00:02:28.289000 audit[1938]: CRED_DISP pid=1938 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:02:28.334694 kernel: audit: type=1106 audit(1755043348.289:172): pid=1938 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:02:28.334785 kernel: audit: type=1104 audit(1755043348.289:173): pid=1938 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:02:28.334605 systemd-logind[1566]: Session 8 logged out. Waiting for processes to exit.
Aug 13 00:02:28.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.35:22-10.200.16.10:42964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:02:28.353596 kernel: audit: type=1131 audit(1755043348.293:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.35:22-10.200.16.10:42964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:02:28.354002 systemd-logind[1566]: Removed session 8.
Aug 13 00:02:28.365259 systemd[1]: Started sshd@6-10.200.20.35:22-10.200.16.10:42974.service.
Aug 13 00:02:28.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.35:22-10.200.16.10:42974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:02:28.455573 update_engine[1568]: I0813 00:02:28.455201 1568 update_attempter.cc:509] Updating boot flags...
Aug 13 00:02:28.834000 audit[1971]: USER_ACCT pid=1971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:02:28.836142 sshd[1971]: Accepted publickey for core from 10.200.16.10 port 42974 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:02:28.835000 audit[1971]: CRED_ACQ pid=1971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:02:28.836000 audit[1971]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff882fad0 a2=3 a3=1 items=0 ppid=1 pid=1971 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:02:28.836000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:02:28.837179 sshd[1971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:02:28.841731 systemd[1]: Started session-9.scope.
Aug 13 00:02:28.842204 systemd-logind[1566]: New session 9 of user core.
Aug 13 00:02:28.846000 audit[1971]: USER_START pid=1971 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:02:28.847000 audit[2013]: CRED_ACQ pid=2013 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:02:29.100000 audit[2014]: USER_ACCT pid=2014 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 13 00:02:29.101000 audit[2014]: CRED_REFR pid=2014 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 13 00:02:29.101615 sudo[2014]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 00:02:29.101862 sudo[2014]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 13 00:02:29.103000 audit[2014]: USER_START pid=2014 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 13 00:02:29.135577 systemd[1]: Starting docker.service...
Aug 13 00:02:29.180846 env[2024]: time="2025-08-13T00:02:29.180796002Z" level=info msg="Starting up"
Aug 13 00:02:29.182277 env[2024]: time="2025-08-13T00:02:29.182249770Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 00:02:29.182277 env[2024]: time="2025-08-13T00:02:29.182271250Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 00:02:29.182398 env[2024]: time="2025-08-13T00:02:29.182289730Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 00:02:29.182398 env[2024]: time="2025-08-13T00:02:29.182300971Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 00:02:29.183979 env[2024]: time="2025-08-13T00:02:29.183958460Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 00:02:29.184079 env[2024]: time="2025-08-13T00:02:29.184064181Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 00:02:29.184146 env[2024]: time="2025-08-13T00:02:29.184129261Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 00:02:29.184196 env[2024]: time="2025-08-13T00:02:29.184184582Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 00:02:29.193096 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport950383293-merged.mount: Deactivated successfully.
Aug 13 00:02:29.251514 env[2024]: time="2025-08-13T00:02:29.251471381Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Aug 13 00:02:29.251748 env[2024]: time="2025-08-13T00:02:29.251726342Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Aug 13 00:02:29.251968 env[2024]: time="2025-08-13T00:02:29.251954144Z" level=info msg="Loading containers: start."
Aug 13 00:02:29.339000 audit[2051]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.339000 audit[2051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd9fb8580 a2=0 a3=1 items=0 ppid=2024 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.339000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Aug 13 00:02:29.341000 audit[2053]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.341000 audit[2053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc7ef1da0 a2=0 a3=1 items=0 ppid=2024 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.341000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Aug 13 00:02:29.342000 audit[2055]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.342000 audit[2055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc6e879c0 a2=0 a3=1 items=0 ppid=2024 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.342000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 13 00:02:29.344000 
audit[2057]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.344000 audit[2057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe0206c40 a2=0 a3=1 items=0 ppid=2024 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.344000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 13 00:02:29.346000 audit[2059]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.346000 audit[2059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe05f29b0 a2=0 a3=1 items=0 ppid=2024 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.346000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Aug 13 00:02:29.347000 audit[2061]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.347000 audit[2061]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd5812260 a2=0 a3=1 items=0 ppid=2024 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.347000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Aug 13 00:02:29.365000 audit[2063]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.365000 audit[2063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe7da7170 a2=0 a3=1 items=0 ppid=2024 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.365000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Aug 13 00:02:29.367000 audit[2065]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.367000 audit[2065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd2d74eb0 a2=0 a3=1 items=0 ppid=2024 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.367000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Aug 13 00:02:29.368000 audit[2067]: NETFILTER_CFG table=filter:13 family=2 entries=2 op=nft_register_chain pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.368000 audit[2067]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffc9657ba0 a2=0 a3=1 items=0 ppid=2024 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.368000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:02:29.387000 audit[2071]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_unregister_rule pid=2071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.387000 audit[2071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd321c170 a2=0 a3=1 items=0 ppid=2024 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.387000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:02:29.392000 audit[2072]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.392000 audit[2072]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd0337eb0 a2=0 a3=1 items=0 ppid=2024 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.392000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:02:29.472695 kernel: Initializing XFRM netlink socket Aug 13 00:02:29.507194 env[2024]: time="2025-08-13T00:02:29.507158057Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Aug 13 00:02:29.635000 audit[2079]: NETFILTER_CFG table=nat:16 family=2 entries=2 op=nft_register_chain pid=2079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.635000 audit[2079]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffd1de4640 a2=0 a3=1 items=0 ppid=2024 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.635000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Aug 13 00:02:29.679000 audit[2082]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_rule pid=2082 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.679000 audit[2082]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff54e3530 a2=0 a3=1 items=0 ppid=2024 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.679000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Aug 13 00:02:29.682000 audit[2085]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_rule pid=2085 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.682000 audit[2085]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffcba40c30 a2=0 a3=1 items=0 ppid=2024 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 
13 00:02:29.682000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Aug 13 00:02:29.684000 audit[2087]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.684000 audit[2087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff10f0900 a2=0 a3=1 items=0 ppid=2024 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.684000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Aug 13 00:02:29.687000 audit[2089]: NETFILTER_CFG table=nat:20 family=2 entries=2 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.687000 audit[2089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=fffff1fc5540 a2=0 a3=1 items=0 ppid=2024 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.687000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Aug 13 00:02:29.689000 audit[2091]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.689000 audit[2091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=fffff40be2f0 a2=0 a3=1 items=0 ppid=2024 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.689000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Aug 13 00:02:29.691000 audit[2093]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.691000 audit[2093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffc9547800 a2=0 a3=1 items=0 ppid=2024 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.691000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Aug 13 00:02:29.693000 audit[2095]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.693000 audit[2095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffd970f1d0 a2=0 a3=1 items=0 ppid=2024 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.693000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Aug 13 00:02:29.695000 audit[2097]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.695000 
audit[2097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffeb1a9190 a2=0 a3=1 items=0 ppid=2024 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.695000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 13 00:02:29.696000 audit[2099]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.696000 audit[2099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffddb01320 a2=0 a3=1 items=0 ppid=2024 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.696000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 13 00:02:29.698000 audit[2101]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.698000 audit[2101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc110a940 a2=0 a3=1 items=0 ppid=2024 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.698000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Aug 13 00:02:29.700012 systemd-networkd[1759]: docker0: Link UP Aug 13 00:02:29.720000 audit[2105]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_unregister_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.720000 audit[2105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd785b520 a2=0 a3=1 items=0 ppid=2024 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.720000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:02:29.729000 audit[2106]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=2106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:29.729000 audit[2106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe6373890 a2=0 a3=1 items=0 ppid=2024 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:29.729000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:02:29.730139 env[2024]: time="2025-08-13T00:02:29.730105419Z" level=info msg="Loading containers: done." Aug 13 00:02:29.741865 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck223523222-merged.mount: Deactivated successfully. 
Aug 13 00:02:29.775283 env[2024]: time="2025-08-13T00:02:29.775228686Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:02:29.775458 env[2024]: time="2025-08-13T00:02:29.775429847Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 00:02:29.775561 env[2024]: time="2025-08-13T00:02:29.775535968Z" level=info msg="Daemon has completed initialization" Aug 13 00:02:29.810026 systemd[1]: Started docker.service. Aug 13 00:02:29.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:29.818512 env[2024]: time="2025-08-13T00:02:29.818268701Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:02:33.349361 env[1583]: time="2025-08-13T00:02:33.349313918Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:02:34.183469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971342171.mount: Deactivated successfully. Aug 13 00:02:35.160330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Aug 13 00:02:35.160496 systemd[1]: Stopped kubelet.service. Aug 13 00:02:35.183835 kernel: kauditd_printk_skb: 84 callbacks suppressed Aug 13 00:02:35.183879 kernel: audit: type=1130 audit(1755043355.159:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:35.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:02:35.162088 systemd[1]: Starting kubelet.service... Aug 13 00:02:35.214051 kernel: audit: type=1131 audit(1755043355.159:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:35.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:35.266502 systemd[1]: Started kubelet.service. Aug 13 00:02:35.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:35.289686 kernel: audit: type=1130 audit(1755043355.266:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:35.395817 kubelet[2145]: E0813 00:02:35.395763 2145 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:02:35.397228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:02:35.397374 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:02:35.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Aug 13 00:02:35.416926 kernel: audit: type=1131 audit(1755043355.397:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 00:02:35.726694 env[1583]: time="2025-08-13T00:02:35.726565677Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:35.731762 env[1583]: time="2025-08-13T00:02:35.731713178Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:35.735251 env[1583]: time="2025-08-13T00:02:35.735213152Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:35.738438 env[1583]: time="2025-08-13T00:02:35.738403085Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:35.739241 env[1583]: time="2025-08-13T00:02:35.739211088Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Aug 13 00:02:35.740752 env[1583]: time="2025-08-13T00:02:35.740726254Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:02:37.131593 env[1583]: time="2025-08-13T00:02:37.131544819Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 13 00:02:37.136087 env[1583]: time="2025-08-13T00:02:37.136053795Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:37.140029 env[1583]: time="2025-08-13T00:02:37.139986449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:37.143219 env[1583]: time="2025-08-13T00:02:37.143192260Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:37.143918 env[1583]: time="2025-08-13T00:02:37.143889903Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Aug 13 00:02:37.144556 env[1583]: time="2025-08-13T00:02:37.144531705Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:02:38.262977 env[1583]: time="2025-08-13T00:02:38.262920564Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:38.267648 env[1583]: time="2025-08-13T00:02:38.267602620Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:38.270867 env[1583]: time="2025-08-13T00:02:38.270834150Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:38.273885 env[1583]: time="2025-08-13T00:02:38.273844520Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:38.274697 env[1583]: time="2025-08-13T00:02:38.274647883Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Aug 13 00:02:38.275207 env[1583]: time="2025-08-13T00:02:38.275171245Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:02:39.358506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2016469058.mount: Deactivated successfully. Aug 13 00:02:39.825017 env[1583]: time="2025-08-13T00:02:39.824971335Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:39.830747 env[1583]: time="2025-08-13T00:02:39.830709273Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:39.834223 env[1583]: time="2025-08-13T00:02:39.834189683Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:39.841182 env[1583]: time="2025-08-13T00:02:39.841145345Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 13 00:02:39.841756 env[1583]: time="2025-08-13T00:02:39.841725267Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Aug 13 00:02:39.842277 env[1583]: time="2025-08-13T00:02:39.842246788Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:02:40.547109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119791692.mount: Deactivated successfully. Aug 13 00:02:41.929137 env[1583]: time="2025-08-13T00:02:41.929078807Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:41.933542 env[1583]: time="2025-08-13T00:02:41.933498010Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:41.936959 env[1583]: time="2025-08-13T00:02:41.936888573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:41.940146 env[1583]: time="2025-08-13T00:02:41.940102495Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:41.941085 env[1583]: time="2025-08-13T00:02:41.941053176Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 13 00:02:41.941730 env[1583]: time="2025-08-13T00:02:41.941701096Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" 
Aug 13 00:02:42.503445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3133159075.mount: Deactivated successfully. Aug 13 00:02:42.519554 env[1583]: time="2025-08-13T00:02:42.519513926Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:42.524797 env[1583]: time="2025-08-13T00:02:42.524745930Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:42.527918 env[1583]: time="2025-08-13T00:02:42.527891172Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:42.531014 env[1583]: time="2025-08-13T00:02:42.530976175Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:42.531568 env[1583]: time="2025-08-13T00:02:42.531540975Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 00:02:42.532106 env[1583]: time="2025-08-13T00:02:42.532081736Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:02:43.158977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1358957556.mount: Deactivated successfully. Aug 13 00:02:45.410343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Aug 13 00:02:45.410518 systemd[1]: Stopped kubelet.service. 
Aug 13 00:02:45.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:45.412075 systemd[1]: Starting kubelet.service... Aug 13 00:02:45.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:45.450507 kernel: audit: type=1130 audit(1755043365.409:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:45.450574 kernel: audit: type=1131 audit(1755043365.409:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:46.125062 systemd[1]: Started kubelet.service. Aug 13 00:02:46.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:46.149698 kernel: audit: type=1130 audit(1755043366.124:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:02:46.570206 kubelet[2160]: E0813 00:02:46.570151 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:02:46.571996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:02:46.572159 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:02:46.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 00:02:46.591690 kernel: audit: type=1131 audit(1755043366.571:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Aug 13 00:02:47.370697 env[1583]: time="2025-08-13T00:02:47.370595689Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:47.377587 env[1583]: time="2025-08-13T00:02:47.377533774Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:47.384686 env[1583]: time="2025-08-13T00:02:47.384630578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:47.388258 env[1583]: time="2025-08-13T00:02:47.388222941Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:47.388786 env[1583]: time="2025-08-13T00:02:47.388758461Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 13 00:02:53.757430 systemd[1]: Stopped kubelet.service. Aug 13 00:02:53.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:53.760268 systemd[1]: Starting kubelet.service... Aug 13 00:02:53.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:02:53.777737 kernel: audit: type=1130 audit(1755043373.757:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:53.798917 kernel: audit: type=1131 audit(1755043373.757:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:53.825005 systemd[1]: Reloading. Aug 13 00:02:53.889821 /usr/lib/systemd/system-generators/torcx-generator[2212]: time="2025-08-13T00:02:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:02:53.890202 /usr/lib/systemd/system-generators/torcx-generator[2212]: time="2025-08-13T00:02:53Z" level=info msg="torcx already run" Aug 13 00:02:53.997116 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:02:53.997139 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:02:54.014875 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:02:54.105822 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:02:54.105894 systemd[1]: kubelet.service: Failed with result 'signal'. 
Aug 13 00:02:54.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 00:02:54.106176 systemd[1]: Stopped kubelet.service. Aug 13 00:02:54.108346 systemd[1]: Starting kubelet.service... Aug 13 00:02:54.132708 kernel: audit: type=1130 audit(1755043374.105:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 00:02:54.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:54.312799 systemd[1]: Started kubelet.service. Aug 13 00:02:54.334742 kernel: audit: type=1130 audit(1755043374.312:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:02:54.361857 kubelet[2292]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:02:54.361857 kubelet[2292]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:02:54.361857 kubelet[2292]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:02:54.362502 kubelet[2292]: I0813 00:02:54.361925 2292 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:02:55.196139 kubelet[2292]: I0813 00:02:55.196101 2292 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:02:55.196298 kubelet[2292]: I0813 00:02:55.196286 2292 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:02:55.196602 kubelet[2292]: I0813 00:02:55.196586 2292 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:02:55.220292 kubelet[2292]: I0813 00:02:55.220263 2292 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:02:55.223075 kubelet[2292]: E0813 00:02:55.223036 2292 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:55.229721 kubelet[2292]: E0813 00:02:55.229678 2292 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:02:55.229849 kubelet[2292]: I0813 00:02:55.229835 2292 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:02:55.233832 kubelet[2292]: I0813 00:02:55.233807 2292 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:02:55.234889 kubelet[2292]: I0813 00:02:55.234867 2292 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:02:55.235141 kubelet[2292]: I0813 00:02:55.235110 2292 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:02:55.235383 kubelet[2292]: I0813 00:02:55.235209 2292 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-a-dd293077f6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:02:55.235524 kubelet[2292]: I0813 00:02:55.235511 2292 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:02:55.235587 kubelet[2292]: I0813 00:02:55.235578 2292 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:02:55.235777 kubelet[2292]: I0813 00:02:55.235761 2292 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:02:55.240387 kubelet[2292]: I0813 00:02:55.240344 2292 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:02:55.240514 kubelet[2292]: I0813 00:02:55.240501 2292 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:02:55.240586 kubelet[2292]: I0813 00:02:55.240577 2292 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:02:55.240687 kubelet[2292]: I0813 00:02:55.240675 2292 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:02:55.246627 kubelet[2292]: W0813 00:02:55.246563 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-dd293077f6&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 13 00:02:55.246758 kubelet[2292]: E0813 00:02:55.246638 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-dd293077f6&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:55.248193 kubelet[2292]: W0813 00:02:55.248151 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: 
connection refused Aug 13 00:02:55.248326 kubelet[2292]: E0813 00:02:55.248309 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:55.248475 kubelet[2292]: I0813 00:02:55.248459 2292 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:02:55.249044 kubelet[2292]: I0813 00:02:55.249027 2292 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:02:55.249173 kubelet[2292]: W0813 00:02:55.249161 2292 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:02:55.250080 kubelet[2292]: I0813 00:02:55.250058 2292 server.go:1274] "Started kubelet" Aug 13 00:02:55.256000 audit[2292]: AVC avc: denied { mac_admin } for pid=2292 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:02:55.262832 kubelet[2292]: I0813 00:02:55.258721 2292 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 00:02:55.262832 kubelet[2292]: I0813 00:02:55.258784 2292 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 13 00:02:55.262832 kubelet[2292]: I0813 00:02:55.258907 2292 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:02:55.270090 
kubelet[2292]: I0813 00:02:55.270044 2292 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:02:55.271422 kubelet[2292]: I0813 00:02:55.271403 2292 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:02:55.272744 kubelet[2292]: I0813 00:02:55.272698 2292 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:02:55.273040 kubelet[2292]: I0813 00:02:55.273027 2292 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:02:55.273313 kubelet[2292]: I0813 00:02:55.273297 2292 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:02:55.274536 kubelet[2292]: I0813 00:02:55.274518 2292 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:02:55.274851 kubelet[2292]: E0813 00:02:55.274829 2292 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-dd293077f6\" not found" Aug 13 00:02:55.256000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:02:55.277636 kubelet[2292]: I0813 00:02:55.277615 2292 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:02:55.277821 kubelet[2292]: I0813 00:02:55.277809 2292 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:02:55.285852 kernel: audit: type=1400 audit(1755043375.256:221): avc: denied { mac_admin } for pid=2292 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:02:55.285974 kernel: audit: type=1401 audit(1755043375.256:221): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:02:55.286001 kernel: audit: type=1300 audit(1755043375.256:221): arch=c00000b7 syscall=5 success=no exit=-22 
a0=40004746c0 a1=40008a07c8 a2=40004745a0 a3=25 items=0 ppid=1 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.256000 audit[2292]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40004746c0 a1=40008a07c8 a2=40004745a0 a3=25 items=0 ppid=1 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.297576 kubelet[2292]: E0813 00:02:55.297509 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-dd293077f6?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="200ms" Aug 13 00:02:55.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:02:55.336142 kernel: audit: type=1327 audit(1755043375.256:221): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:02:55.256000 audit[2292]: AVC avc: denied { mac_admin } for pid=2292 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:02:55.354226 kernel: audit: type=1400 audit(1755043375.256:222): avc: denied { mac_admin } for pid=2292 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:02:55.354364 kernel: audit: type=1401 audit(1755043375.256:222): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:02:55.256000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:02:55.256000 audit[2292]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40008d8500 a1=40008a07e0 a2=4000474810 a3=25 items=0 ppid=1 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:02:55.267000 audit[2303]: NETFILTER_CFG table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2303 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:55.267000 audit[2303]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdcd1e250 a2=0 a3=1 items=0 ppid=2292 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 00:02:55.267000 audit[2304]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2304 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:55.267000 audit[2304]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca8eaf50 a2=0 a3=1 items=0 ppid=2292 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 00:02:55.275000 audit[2306]: NETFILTER_CFG table=filter:31 family=2 entries=2 op=nft_register_chain pid=2306 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:55.275000 audit[2306]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc3b12900 a2=0 a3=1 items=0 ppid=2292 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:02:55.275000 audit[2308]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2308 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:55.275000 audit[2308]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcdf2b7b0 a2=0 a3=1 items=0 ppid=2292 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:02:55.365422 kubelet[2292]: W0813 00:02:55.365348 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 13 00:02:55.365813 
kubelet[2292]: E0813 00:02:55.365791 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:55.366858 kubelet[2292]: E0813 00:02:55.364099 2292 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.35:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-a-dd293077f6.185b2aa6b32e5959 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-a-dd293077f6,UID:ci-3510.3.8-a-dd293077f6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-a-dd293077f6,},FirstTimestamp:2025-08-13 00:02:55.250037081 +0000 UTC m=+0.931442495,LastTimestamp:2025-08-13 00:02:55.250037081 +0000 UTC m=+0.931442495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-a-dd293077f6,}" Aug 13 00:02:55.367479 kubelet[2292]: E0813 00:02:55.367460 2292 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:02:55.367604 kubelet[2292]: I0813 00:02:55.367577 2292 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:02:55.367604 kubelet[2292]: I0813 00:02:55.367596 2292 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:02:55.367721 kubelet[2292]: I0813 00:02:55.367698 2292 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:02:55.390730 kubelet[2292]: E0813 00:02:55.389799 2292 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-dd293077f6\" not found" Aug 13 00:02:55.394000 audit[2315]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:55.394000 audit[2315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffffc40d880 a2=0 a3=1 items=0 ppid=2292 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.394000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Aug 13 00:02:55.395645 kubelet[2292]: I0813 00:02:55.395612 2292 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Aug 13 00:02:55.395000 audit[2316]: NETFILTER_CFG table=mangle:34 family=10 entries=2 op=nft_register_chain pid=2316 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:02:55.395000 audit[2316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd4ee0ff0 a2=0 a3=1 items=0 ppid=2292 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.395000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 00:02:55.397367 kubelet[2292]: I0813 00:02:55.397344 2292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:02:55.397464 kubelet[2292]: I0813 00:02:55.397452 2292 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:02:55.397543 kubelet[2292]: I0813 00:02:55.397533 2292 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:02:55.397649 kubelet[2292]: E0813 00:02:55.397630 2292 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:02:55.397000 audit[2317]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:55.397000 audit[2317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffffc35810 a2=0 a3=1 items=0 ppid=2292 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.397000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 
00:02:55.398000 audit[2318]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:02:55.398000 audit[2318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffff4af060 a2=0 a3=1 items=0 ppid=2292 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.398000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 00:02:55.398000 audit[2319]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2319 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:55.398000 audit[2319]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0f9b6b0 a2=0 a3=1 items=0 ppid=2292 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.398000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 00:02:55.399000 audit[2320]: NETFILTER_CFG table=nat:38 family=10 entries=2 op=nft_register_chain pid=2320 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:02:55.399000 audit[2320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffcfbc0af0 a2=0 a3=1 items=0 ppid=2292 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.399000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 00:02:55.400000 audit[2321]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:02:55.400000 audit[2321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff993fb50 a2=0 a3=1 items=0 ppid=2292 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.400000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 00:02:55.400000 audit[2322]: NETFILTER_CFG table=filter:40 family=10 entries=2 op=nft_register_chain pid=2322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:02:55.400000 audit[2322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff3346180 a2=0 a3=1 items=0 ppid=2292 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.400000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 00:02:55.401950 kubelet[2292]: W0813 00:02:55.401920 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 13 00:02:55.402099 kubelet[2292]: E0813 00:02:55.402074 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: 
Get \"https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:55.435628 kubelet[2292]: I0813 00:02:55.435598 2292 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:02:55.435805 kubelet[2292]: I0813 00:02:55.435790 2292 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:02:55.435883 kubelet[2292]: I0813 00:02:55.435874 2292 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:02:55.451045 kubelet[2292]: I0813 00:02:55.450967 2292 policy_none.go:49] "None policy: Start" Aug 13 00:02:55.453017 kubelet[2292]: I0813 00:02:55.452971 2292 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:02:55.453180 kubelet[2292]: I0813 00:02:55.453168 2292 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:02:55.460033 kubelet[2292]: I0813 00:02:55.460006 2292 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:02:55.459000 audit[2292]: AVC avc: denied { mac_admin } for pid=2292 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:02:55.459000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:02:55.459000 audit[2292]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000d48480 a1=400085b578 a2=4000d48450 a3=25 items=0 ppid=1 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:02:55.459000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:02:55.460442 kubelet[2292]: I0813 00:02:55.460423 2292 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 00:02:55.460620 kubelet[2292]: I0813 00:02:55.460606 2292 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:02:55.460736 kubelet[2292]: I0813 00:02:55.460701 2292 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:02:55.462112 kubelet[2292]: I0813 00:02:55.462089 2292 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:02:55.463436 kubelet[2292]: E0813 00:02:55.463419 2292 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-a-dd293077f6\" not found" Aug 13 00:02:55.498928 kubelet[2292]: E0813 00:02:55.498879 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-dd293077f6?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="400ms" Aug 13 00:02:55.563008 kubelet[2292]: I0813 00:02:55.562972 2292 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-dd293077f6" Aug 13 00:02:55.563401 kubelet[2292]: E0813 00:02:55.563370 2292 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-3510.3.8-a-dd293077f6" Aug 13 00:02:55.586868 kubelet[2292]: 
I0813 00:02:55.586823 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02e7e21fb8dd2d31422d595630052744-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-a-dd293077f6\" (UID: \"02e7e21fb8dd2d31422d595630052744\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-dd293077f6" Aug 13 00:02:55.586868 kubelet[2292]: I0813 00:02:55.586870 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/02e7e21fb8dd2d31422d595630052744-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-dd293077f6\" (UID: \"02e7e21fb8dd2d31422d595630052744\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-dd293077f6" Aug 13 00:02:55.586996 kubelet[2292]: I0813 00:02:55.586888 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dca2b2e0ee88b447d5a5b3e53201283-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-a-dd293077f6\" (UID: \"8dca2b2e0ee88b447d5a5b3e53201283\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-dd293077f6" Aug 13 00:02:55.586996 kubelet[2292]: I0813 00:02:55.586909 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dca2b2e0ee88b447d5a5b3e53201283-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-a-dd293077f6\" (UID: \"8dca2b2e0ee88b447d5a5b3e53201283\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-dd293077f6" Aug 13 00:02:55.586996 kubelet[2292]: I0813 00:02:55.586929 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/02e7e21fb8dd2d31422d595630052744-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-dd293077f6\" (UID: 
\"02e7e21fb8dd2d31422d595630052744\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-dd293077f6" Aug 13 00:02:55.586996 kubelet[2292]: I0813 00:02:55.586946 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/02e7e21fb8dd2d31422d595630052744-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-a-dd293077f6\" (UID: \"02e7e21fb8dd2d31422d595630052744\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-dd293077f6" Aug 13 00:02:55.586996 kubelet[2292]: I0813 00:02:55.586962 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02e7e21fb8dd2d31422d595630052744-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-a-dd293077f6\" (UID: \"02e7e21fb8dd2d31422d595630052744\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-dd293077f6" Aug 13 00:02:55.587103 kubelet[2292]: I0813 00:02:55.586978 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38417e2e82f35901f1470743ad919a4c-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-a-dd293077f6\" (UID: \"38417e2e82f35901f1470743ad919a4c\") " pod="kube-system/kube-scheduler-ci-3510.3.8-a-dd293077f6" Aug 13 00:02:55.587103 kubelet[2292]: I0813 00:02:55.586995 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dca2b2e0ee88b447d5a5b3e53201283-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-a-dd293077f6\" (UID: \"8dca2b2e0ee88b447d5a5b3e53201283\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-dd293077f6" Aug 13 00:02:55.765781 kubelet[2292]: I0813 00:02:55.765696 2292 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-dd293077f6" Aug 13 
00:02:55.766725 kubelet[2292]: E0813 00:02:55.766688 2292 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-3510.3.8-a-dd293077f6" Aug 13 00:02:55.804230 env[1583]: time="2025-08-13T00:02:55.803932648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-a-dd293077f6,Uid:8dca2b2e0ee88b447d5a5b3e53201283,Namespace:kube-system,Attempt:0,}" Aug 13 00:02:55.808524 env[1583]: time="2025-08-13T00:02:55.808482131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-a-dd293077f6,Uid:02e7e21fb8dd2d31422d595630052744,Namespace:kube-system,Attempt:0,}" Aug 13 00:02:55.811489 env[1583]: time="2025-08-13T00:02:55.811342572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-a-dd293077f6,Uid:38417e2e82f35901f1470743ad919a4c,Namespace:kube-system,Attempt:0,}" Aug 13 00:02:55.900003 kubelet[2292]: E0813 00:02:55.899951 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-dd293077f6?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="800ms" Aug 13 00:02:56.168687 kubelet[2292]: I0813 00:02:56.168403 2292 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-dd293077f6" Aug 13 00:02:56.168834 kubelet[2292]: E0813 00:02:56.168771 2292 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-3510.3.8-a-dd293077f6" Aug 13 00:02:56.190540 kubelet[2292]: W0813 00:02:56.190485 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 13 00:02:56.190636 kubelet[2292]: E0813 00:02:56.190552 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:56.424557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1346286543.mount: Deactivated successfully. Aug 13 00:02:56.438380 kubelet[2292]: W0813 00:02:56.438270 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-dd293077f6&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 13 00:02:56.438380 kubelet[2292]: E0813 00:02:56.438337 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-dd293077f6&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:56.445015 env[1583]: time="2025-08-13T00:02:56.444974415Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:56.458306 env[1583]: time="2025-08-13T00:02:56.458257062Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:56.465632 env[1583]: time="2025-08-13T00:02:56.465598546Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:56.467836 env[1583]: time="2025-08-13T00:02:56.467795947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:56.470540 env[1583]: time="2025-08-13T00:02:56.470515108Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:56.476473 env[1583]: time="2025-08-13T00:02:56.476446911Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:56.479262 env[1583]: time="2025-08-13T00:02:56.479238193Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:56.483256 env[1583]: time="2025-08-13T00:02:56.483231675Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:56.486354 env[1583]: time="2025-08-13T00:02:56.486328436Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:56.489275 env[1583]: time="2025-08-13T00:02:56.489235838Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:56.497718 env[1583]: 
time="2025-08-13T00:02:56.497689202Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:56.502277 env[1583]: time="2025-08-13T00:02:56.502249404Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:56.538614 env[1583]: time="2025-08-13T00:02:56.538546583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:02:56.538853 env[1583]: time="2025-08-13T00:02:56.538804063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:02:56.538853 env[1583]: time="2025-08-13T00:02:56.538825263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:56.539163 env[1583]: time="2025-08-13T00:02:56.539096383Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/16899eb99804a44c2d05cf964ffcbaf0a91bb2bc97b5c00448184e365c286e2e pid=2331 runtime=io.containerd.runc.v2 Aug 13 00:02:56.577753 env[1583]: time="2025-08-13T00:02:56.577671403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:02:56.577753 env[1583]: time="2025-08-13T00:02:56.577719243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:02:56.577753 env[1583]: time="2025-08-13T00:02:56.577729203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:56.578027 env[1583]: time="2025-08-13T00:02:56.577841443Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/948b63da8a5788773a60b2fa9dff246d8cfee6a74249066e13126232a0ea307f pid=2372 runtime=io.containerd.runc.v2 Aug 13 00:02:56.585313 env[1583]: time="2025-08-13T00:02:56.585232366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:02:56.585475 env[1583]: time="2025-08-13T00:02:56.585306806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:02:56.585475 env[1583]: time="2025-08-13T00:02:56.585320206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:56.585728 env[1583]: time="2025-08-13T00:02:56.585646927Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b1a30f45a2fdee5c2f3348596aa2fe770fff382edee64abfca07dcf8f7cb698c pid=2369 runtime=io.containerd.runc.v2 Aug 13 00:02:56.596618 env[1583]: time="2025-08-13T00:02:56.596576972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-a-dd293077f6,Uid:38417e2e82f35901f1470743ad919a4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"16899eb99804a44c2d05cf964ffcbaf0a91bb2bc97b5c00448184e365c286e2e\"" Aug 13 00:02:56.600151 env[1583]: time="2025-08-13T00:02:56.600111334Z" level=info msg="CreateContainer within sandbox \"16899eb99804a44c2d05cf964ffcbaf0a91bb2bc97b5c00448184e365c286e2e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:02:56.640679 env[1583]: time="2025-08-13T00:02:56.640193434Z" level=info msg="CreateContainer within sandbox \"16899eb99804a44c2d05cf964ffcbaf0a91bb2bc97b5c00448184e365c286e2e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9a9eb52b475a376cbd9e2be67ab83d97e69366e2e63347fad718854916377de3\"" Aug 13 00:02:56.641398 env[1583]: time="2025-08-13T00:02:56.641372315Z" level=info msg="StartContainer for \"9a9eb52b475a376cbd9e2be67ab83d97e69366e2e63347fad718854916377de3\"" Aug 13 00:02:56.652467 kubelet[2292]: W0813 00:02:56.652338 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 13 00:02:56.652467 kubelet[2292]: E0813 00:02:56.652414 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:56.655004 env[1583]: time="2025-08-13T00:02:56.654964322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-a-dd293077f6,Uid:8dca2b2e0ee88b447d5a5b3e53201283,Namespace:kube-system,Attempt:0,} returns sandbox id \"948b63da8a5788773a60b2fa9dff246d8cfee6a74249066e13126232a0ea307f\"" Aug 13 00:02:56.657586 env[1583]: time="2025-08-13T00:02:56.657547803Z" level=info msg="CreateContainer within sandbox \"948b63da8a5788773a60b2fa9dff246d8cfee6a74249066e13126232a0ea307f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:02:56.676911 env[1583]: time="2025-08-13T00:02:56.676161692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-a-dd293077f6,Uid:02e7e21fb8dd2d31422d595630052744,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1a30f45a2fdee5c2f3348596aa2fe770fff382edee64abfca07dcf8f7cb698c\"" Aug 13 00:02:56.680910 env[1583]: time="2025-08-13T00:02:56.680865215Z" level=info msg="CreateContainer within sandbox \"b1a30f45a2fdee5c2f3348596aa2fe770fff382edee64abfca07dcf8f7cb698c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:02:56.701356 kubelet[2292]: E0813 00:02:56.701287 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-dd293077f6?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="1.6s" Aug 13 00:02:56.702857 env[1583]: time="2025-08-13T00:02:56.702816666Z" level=info msg="CreateContainer within sandbox \"948b63da8a5788773a60b2fa9dff246d8cfee6a74249066e13126232a0ea307f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"26f663672d3835e11a7c9bba25390dae5e56be0d1cbb85e9f6be756b48621789\"" Aug 13 00:02:56.703839 env[1583]: time="2025-08-13T00:02:56.703410186Z" level=info msg="StartContainer for \"26f663672d3835e11a7c9bba25390dae5e56be0d1cbb85e9f6be756b48621789\"" Aug 13 00:02:56.721069 env[1583]: time="2025-08-13T00:02:56.719396754Z" level=info msg="CreateContainer within sandbox \"b1a30f45a2fdee5c2f3348596aa2fe770fff382edee64abfca07dcf8f7cb698c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"864e00ab855cbced1af5c90a1dcc4c251467d341d12ade366b2937d247ecad8d\"" Aug 13 00:02:56.721849 env[1583]: time="2025-08-13T00:02:56.721816475Z" level=info msg="StartContainer for \"864e00ab855cbced1af5c90a1dcc4c251467d341d12ade366b2937d247ecad8d\"" Aug 13 00:02:56.722725 env[1583]: time="2025-08-13T00:02:56.722686476Z" level=info msg="StartContainer for \"9a9eb52b475a376cbd9e2be67ab83d97e69366e2e63347fad718854916377de3\" returns successfully" Aug 13 00:02:56.802756 env[1583]: time="2025-08-13T00:02:56.802710676Z" level=info msg="StartContainer for \"26f663672d3835e11a7c9bba25390dae5e56be0d1cbb85e9f6be756b48621789\" returns successfully" Aug 13 00:02:56.813958 kubelet[2292]: W0813 00:02:56.813852 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 13 00:02:56.813958 kubelet[2292]: E0813 00:02:56.813923 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:56.825322 env[1583]: time="2025-08-13T00:02:56.825271168Z" level=info msg="StartContainer for 
\"864e00ab855cbced1af5c90a1dcc4c251467d341d12ade366b2937d247ecad8d\" returns successfully" Aug 13 00:02:56.972596 kubelet[2292]: I0813 00:02:56.971322 2292 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-dd293077f6" Aug 13 00:02:59.349686 kubelet[2292]: E0813 00:02:59.349640 2292 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-a-dd293077f6\" not found" node="ci-3510.3.8-a-dd293077f6" Aug 13 00:02:59.441573 kubelet[2292]: E0813 00:02:59.441470 2292 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.8-a-dd293077f6.185b2aa6b32e5959 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-a-dd293077f6,UID:ci-3510.3.8-a-dd293077f6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-a-dd293077f6,},FirstTimestamp:2025-08-13 00:02:55.250037081 +0000 UTC m=+0.931442495,LastTimestamp:2025-08-13 00:02:55.250037081 +0000 UTC m=+0.931442495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-a-dd293077f6,}" Aug 13 00:02:59.485322 kubelet[2292]: I0813 00:02:59.485288 2292 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-a-dd293077f6" Aug 13 00:02:59.485569 kubelet[2292]: E0813 00:02:59.485546 2292 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-a-dd293077f6\": node \"ci-3510.3.8-a-dd293077f6\" not found" Aug 13 00:03:00.248620 kubelet[2292]: I0813 00:03:00.248592 2292 apiserver.go:52] "Watching apiserver" Aug 13 00:03:00.278865 kubelet[2292]: I0813 00:03:00.278833 2292 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 
00:03:01.903907 systemd[1]: Reloading. Aug 13 00:03:01.964056 /usr/lib/systemd/system-generators/torcx-generator[2579]: time="2025-08-13T00:03:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:03:01.964087 /usr/lib/systemd/system-generators/torcx-generator[2579]: time="2025-08-13T00:03:01Z" level=info msg="torcx already run" Aug 13 00:03:02.071678 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:03:02.071699 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:03:02.088780 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:03:02.176828 systemd[1]: Stopping kubelet.service... Aug 13 00:03:02.193091 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:03:02.193393 systemd[1]: Stopped kubelet.service. Aug 13 00:03:02.217576 kernel: kauditd_printk_skb: 42 callbacks suppressed Aug 13 00:03:02.217626 kernel: audit: type=1131 audit(1755043382.192:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:02.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:02.195854 systemd[1]: Starting kubelet.service... 
Aug 13 00:03:02.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:02.308423 systemd[1]: Started kubelet.service. Aug 13 00:03:02.329690 kernel: audit: type=1130 audit(1755043382.308:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:02.388491 kubelet[2655]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:03:02.388881 kubelet[2655]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:03:02.388933 kubelet[2655]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:03:02.389068 kubelet[2655]: I0813 00:03:02.389040 2655 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:03:02.396101 kubelet[2655]: I0813 00:03:02.396063 2655 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:03:02.396247 kubelet[2655]: I0813 00:03:02.396236 2655 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:03:02.396589 kubelet[2655]: I0813 00:03:02.396570 2655 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:03:02.398172 kubelet[2655]: I0813 00:03:02.398132 2655 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:03:02.400611 kubelet[2655]: I0813 00:03:02.400580 2655 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:03:02.405625 kubelet[2655]: E0813 00:03:02.405587 2655 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:03:02.405845 kubelet[2655]: I0813 00:03:02.405827 2655 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:03:02.411074 kubelet[2655]: I0813 00:03:02.411036 2655 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:03:02.411484 kubelet[2655]: I0813 00:03:02.411457 2655 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:03:02.411596 kubelet[2655]: I0813 00:03:02.411562 2655 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:03:02.411798 kubelet[2655]: I0813 00:03:02.411593 2655 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-a-dd293077f6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:03:02.411895 kubelet[2655]: I0813 00:03:02.411803 2655 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:03:02.411895 kubelet[2655]: I0813 00:03:02.411813 2655 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:03:02.411895 kubelet[2655]: I0813 00:03:02.411848 2655 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:03:02.411977 kubelet[2655]: I0813 00:03:02.411939 2655 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:03:02.411977 kubelet[2655]: I0813 00:03:02.411953 2655 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:03:02.411977 kubelet[2655]: I0813 00:03:02.411971 2655 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:03:02.412047 kubelet[2655]: I0813 00:03:02.411980 2655 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:03:02.417188 kubelet[2655]: I0813 00:03:02.417160 2655 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:03:02.417868 kubelet[2655]: I0813 00:03:02.417846 2655 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:03:02.418404 kubelet[2655]: I0813 00:03:02.418385 2655 server.go:1274] "Started kubelet" Aug 13 00:03:02.419000 audit[2655]: AVC avc: denied { mac_admin } for pid=2655 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:02.441139 kubelet[2655]: I0813 00:03:02.441103 2655 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 00:03:02.441310 kubelet[2655]: I0813 00:03:02.441294 2655 kubelet.go:1434] 
"Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 13 00:03:02.441397 kubelet[2655]: I0813 00:03:02.441385 2655 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:03:02.419000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:03:02.455073 kernel: audit: type=1400 audit(1755043382.419:238): avc: denied { mac_admin } for pid=2655 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:02.455172 kernel: audit: type=1401 audit(1755043382.419:238): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:03:02.455376 kubelet[2655]: I0813 00:03:02.455336 2655 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:03:02.456385 kubelet[2655]: I0813 00:03:02.456363 2655 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:03:02.457492 kubelet[2655]: I0813 00:03:02.457431 2655 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:03:02.457829 kubelet[2655]: I0813 00:03:02.457814 2655 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:03:02.458145 kubelet[2655]: I0813 00:03:02.458127 2655 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:03:02.459501 kubelet[2655]: I0813 00:03:02.459482 2655 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:03:02.459891 kubelet[2655]: E0813 00:03:02.459867 2655 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-dd293077f6\" not found" Aug 13 00:03:02.460428 kubelet[2655]: I0813 
00:03:02.460409 2655 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:03:02.460623 kubelet[2655]: I0813 00:03:02.460612 2655 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:03:02.419000 audit[2655]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000379170 a1=400064de78 a2=40003790b0 a3=25 items=0 ppid=1 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:02.488129 kubelet[2655]: I0813 00:03:02.488093 2655 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:03:02.488750 kernel: audit: type=1300 audit(1755043382.419:238): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000379170 a1=400064de78 a2=40003790b0 a3=25 items=0 ppid=1 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:02.490335 kubelet[2655]: I0813 00:03:02.490298 2655 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:03:02.490471 kubelet[2655]: I0813 00:03:02.490460 2655 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:03:02.490553 kubelet[2655]: I0813 00:03:02.490544 2655 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:03:02.491359 kubelet[2655]: E0813 00:03:02.490706 2655 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:03:02.419000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:03:02.514832 kubelet[2655]: I0813 00:03:02.507207 2655 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:03:02.514832 kubelet[2655]: I0813 00:03:02.507350 2655 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:03:02.518272 kernel: audit: type=1327 audit(1755043382.419:238): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:03:02.440000 audit[2655]: AVC avc: denied { mac_admin } for pid=2655 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:02.539892 kernel: audit: type=1400 audit(1755043382.440:239): avc: denied { mac_admin } for pid=2655 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Aug 13 00:03:02.541951 kubelet[2655]: E0813 00:03:02.541927 2655 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:03:02.542696 kubelet[2655]: I0813 00:03:02.542647 2655 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:03:02.440000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:03:02.553354 kernel: audit: type=1401 audit(1755043382.440:239): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:03:02.440000 audit[2655]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000936240 a1=400064de90 a2=4000379530 a3=25 items=0 ppid=1 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:02.585599 kernel: audit: type=1300 audit(1755043382.440:239): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000936240 a1=400064de90 a2=4000379530 a3=25 items=0 ppid=1 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:02.585916 kernel: audit: type=1327 audit(1755043382.440:239): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:03:02.440000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 
00:03:02.592688 kubelet[2655]: E0813 00:03:02.592632 2655 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:03:02.629397 kubelet[2655]: I0813 00:03:02.629372 2655 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:03:02.629557 kubelet[2655]: I0813 00:03:02.629543 2655 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:03:02.629619 kubelet[2655]: I0813 00:03:02.629610 2655 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:03:02.629864 kubelet[2655]: I0813 00:03:02.629846 2655 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:03:02.629957 kubelet[2655]: I0813 00:03:02.629931 2655 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:03:02.630014 kubelet[2655]: I0813 00:03:02.630005 2655 policy_none.go:49] "None policy: Start" Aug 13 00:03:02.630808 kubelet[2655]: I0813 00:03:02.630792 2655 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:03:02.630903 kubelet[2655]: I0813 00:03:02.630893 2655 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:03:02.631106 kubelet[2655]: I0813 00:03:02.631091 2655 state_mem.go:75] "Updated machine memory state" Aug 13 00:03:02.632353 kubelet[2655]: I0813 00:03:02.632332 2655 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:03:02.631000 audit[2655]: AVC avc: denied { mac_admin } for pid=2655 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:02.631000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:03:02.631000 audit[2655]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000fdb470 a1=4000fe4210 a2=4000fdb440 a3=25 items=0 ppid=1 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:02.631000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:03:02.632736 kubelet[2655]: I0813 00:03:02.632718 2655 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 00:03:02.632932 kubelet[2655]: I0813 00:03:02.632918 2655 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:03:02.633032 kubelet[2655]: I0813 00:03:02.633001 2655 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:03:02.634914 kubelet[2655]: I0813 00:03:02.634538 2655 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:03:02.742139 kubelet[2655]: I0813 00:03:02.741071 2655 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:02.755954 kubelet[2655]: I0813 00:03:02.755921 2655 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:02.756174 kubelet[2655]: I0813 00:03:02.756164 2655 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:02.807720 kubelet[2655]: W0813 00:03:02.807597 2655 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:03:02.811806 kubelet[2655]: W0813 00:03:02.811614 2655 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label 
is recommended: [must not contain dots] Aug 13 00:03:02.812023 kubelet[2655]: W0813 00:03:02.811965 2655 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:03:02.878337 kubelet[2655]: I0813 00:03:02.878294 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dca2b2e0ee88b447d5a5b3e53201283-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-a-dd293077f6\" (UID: \"8dca2b2e0ee88b447d5a5b3e53201283\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-dd293077f6" Aug 13 00:03:02.878490 kubelet[2655]: I0813 00:03:02.878361 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02e7e21fb8dd2d31422d595630052744-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-a-dd293077f6\" (UID: \"02e7e21fb8dd2d31422d595630052744\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-dd293077f6" Aug 13 00:03:02.878490 kubelet[2655]: I0813 00:03:02.878385 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dca2b2e0ee88b447d5a5b3e53201283-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-a-dd293077f6\" (UID: \"8dca2b2e0ee88b447d5a5b3e53201283\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-dd293077f6" Aug 13 00:03:02.878490 kubelet[2655]: I0813 00:03:02.878402 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dca2b2e0ee88b447d5a5b3e53201283-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-a-dd293077f6\" (UID: \"8dca2b2e0ee88b447d5a5b3e53201283\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-dd293077f6" Aug 13 00:03:02.878490 kubelet[2655]: 
I0813 00:03:02.878435 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/02e7e21fb8dd2d31422d595630052744-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-dd293077f6\" (UID: \"02e7e21fb8dd2d31422d595630052744\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-dd293077f6" Aug 13 00:03:02.878490 kubelet[2655]: I0813 00:03:02.878453 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/02e7e21fb8dd2d31422d595630052744-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-a-dd293077f6\" (UID: \"02e7e21fb8dd2d31422d595630052744\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-dd293077f6" Aug 13 00:03:02.878613 kubelet[2655]: I0813 00:03:02.878468 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/02e7e21fb8dd2d31422d595630052744-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-dd293077f6\" (UID: \"02e7e21fb8dd2d31422d595630052744\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-dd293077f6" Aug 13 00:03:02.878613 kubelet[2655]: I0813 00:03:02.878484 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02e7e21fb8dd2d31422d595630052744-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-a-dd293077f6\" (UID: \"02e7e21fb8dd2d31422d595630052744\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-dd293077f6" Aug 13 00:03:02.878613 kubelet[2655]: I0813 00:03:02.878516 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38417e2e82f35901f1470743ad919a4c-kubeconfig\") pod 
\"kube-scheduler-ci-3510.3.8-a-dd293077f6\" (UID: \"38417e2e82f35901f1470743ad919a4c\") " pod="kube-system/kube-scheduler-ci-3510.3.8-a-dd293077f6" Aug 13 00:03:03.412773 kubelet[2655]: I0813 00:03:03.412741 2655 apiserver.go:52] "Watching apiserver" Aug 13 00:03:03.461200 kubelet[2655]: I0813 00:03:03.461159 2655 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:03:03.556409 kubelet[2655]: I0813 00:03:03.556350 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-dd293077f6" podStartSLOduration=1.5563331040000001 podStartE2EDuration="1.556333104s" podCreationTimestamp="2025-08-13 00:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:03.556140224 +0000 UTC m=+1.235038970" watchObservedRunningTime="2025-08-13 00:03:03.556333104 +0000 UTC m=+1.235231850" Aug 13 00:03:03.556725 kubelet[2655]: I0813 00:03:03.556693 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-a-dd293077f6" podStartSLOduration=1.556686944 podStartE2EDuration="1.556686944s" podCreationTimestamp="2025-08-13 00:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:03.542154898 +0000 UTC m=+1.221053604" watchObservedRunningTime="2025-08-13 00:03:03.556686944 +0000 UTC m=+1.235585690" Aug 13 00:03:03.565226 kubelet[2655]: W0813 00:03:03.565195 2655 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:03:03.565427 kubelet[2655]: E0813 00:03:03.565410 2655 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-a-dd293077f6\" already exists" 
pod="kube-system/kube-apiserver-ci-3510.3.8-a-dd293077f6" Aug 13 00:03:03.572505 kubelet[2655]: I0813 00:03:03.572452 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-a-dd293077f6" podStartSLOduration=1.5724334309999999 podStartE2EDuration="1.572433431s" podCreationTimestamp="2025-08-13 00:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:03.57217323 +0000 UTC m=+1.251071936" watchObservedRunningTime="2025-08-13 00:03:03.572433431 +0000 UTC m=+1.251332177" Aug 13 00:03:08.108608 kubelet[2655]: I0813 00:03:08.108567 2655 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:03:08.110062 env[1583]: time="2025-08-13T00:03:08.110008998Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:03:08.110560 kubelet[2655]: I0813 00:03:08.110536 2655 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:03:08.914752 kubelet[2655]: I0813 00:03:08.914708 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff2bce60-cc6d-4ae6-8d62-48df468e46e5-xtables-lock\") pod \"kube-proxy-99qfr\" (UID: \"ff2bce60-cc6d-4ae6-8d62-48df468e46e5\") " pod="kube-system/kube-proxy-99qfr" Aug 13 00:03:08.914968 kubelet[2655]: I0813 00:03:08.914950 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff2bce60-cc6d-4ae6-8d62-48df468e46e5-kube-proxy\") pod \"kube-proxy-99qfr\" (UID: \"ff2bce60-cc6d-4ae6-8d62-48df468e46e5\") " pod="kube-system/kube-proxy-99qfr" Aug 13 00:03:08.915068 kubelet[2655]: I0813 00:03:08.915055 2655 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff2bce60-cc6d-4ae6-8d62-48df468e46e5-lib-modules\") pod \"kube-proxy-99qfr\" (UID: \"ff2bce60-cc6d-4ae6-8d62-48df468e46e5\") " pod="kube-system/kube-proxy-99qfr" Aug 13 00:03:08.915194 kubelet[2655]: I0813 00:03:08.915176 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tntsw\" (UniqueName: \"kubernetes.io/projected/ff2bce60-cc6d-4ae6-8d62-48df468e46e5-kube-api-access-tntsw\") pod \"kube-proxy-99qfr\" (UID: \"ff2bce60-cc6d-4ae6-8d62-48df468e46e5\") " pod="kube-system/kube-proxy-99qfr" Aug 13 00:03:09.025131 kubelet[2655]: I0813 00:03:09.025099 2655 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 00:03:09.164888 env[1583]: time="2025-08-13T00:03:09.164774629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-99qfr,Uid:ff2bce60-cc6d-4ae6-8d62-48df468e46e5,Namespace:kube-system,Attempt:0,}" Aug 13 00:03:09.199975 env[1583]: time="2025-08-13T00:03:09.198100761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:09.199975 env[1583]: time="2025-08-13T00:03:09.198147521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:09.199975 env[1583]: time="2025-08-13T00:03:09.198163481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:09.199975 env[1583]: time="2025-08-13T00:03:09.198280801Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/48913504c92b25a95d7350598d201caede79fb2730a7459b28a8d1e27301c1fb pid=2703 runtime=io.containerd.runc.v2 Aug 13 00:03:09.216783 kubelet[2655]: I0813 00:03:09.216684 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e8bad7ea-9941-43e6-97fc-fd9e045db6ac-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-jsq9f\" (UID: \"e8bad7ea-9941-43e6-97fc-fd9e045db6ac\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-jsq9f" Aug 13 00:03:09.216783 kubelet[2655]: I0813 00:03:09.216729 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k87pp\" (UniqueName: \"kubernetes.io/projected/e8bad7ea-9941-43e6-97fc-fd9e045db6ac-kube-api-access-k87pp\") pod \"tigera-operator-5bf8dfcb4-jsq9f\" (UID: \"e8bad7ea-9941-43e6-97fc-fd9e045db6ac\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-jsq9f" Aug 13 00:03:09.257003 env[1583]: time="2025-08-13T00:03:09.256954543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-99qfr,Uid:ff2bce60-cc6d-4ae6-8d62-48df468e46e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"48913504c92b25a95d7350598d201caede79fb2730a7459b28a8d1e27301c1fb\"" Aug 13 00:03:09.261744 env[1583]: time="2025-08-13T00:03:09.260781464Z" level=info msg="CreateContainer within sandbox \"48913504c92b25a95d7350598d201caede79fb2730a7459b28a8d1e27301c1fb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:03:09.307092 env[1583]: time="2025-08-13T00:03:09.307020321Z" level=info msg="CreateContainer within sandbox \"48913504c92b25a95d7350598d201caede79fb2730a7459b28a8d1e27301c1fb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} 
returns container id \"c910cfaf0637f4ba12bd2820366607a528345e10bc057154feb73150789c64bf\"" Aug 13 00:03:09.309870 env[1583]: time="2025-08-13T00:03:09.308915721Z" level=info msg="StartContainer for \"c910cfaf0637f4ba12bd2820366607a528345e10bc057154feb73150789c64bf\"" Aug 13 00:03:09.368417 env[1583]: time="2025-08-13T00:03:09.368366383Z" level=info msg="StartContainer for \"c910cfaf0637f4ba12bd2820366607a528345e10bc057154feb73150789c64bf\" returns successfully" Aug 13 00:03:09.468410 env[1583]: time="2025-08-13T00:03:09.468295739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-jsq9f,Uid:e8bad7ea-9941-43e6-97fc-fd9e045db6ac,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:03:09.497833 env[1583]: time="2025-08-13T00:03:09.497521590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:09.497833 env[1583]: time="2025-08-13T00:03:09.497588870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:09.497833 env[1583]: time="2025-08-13T00:03:09.497607430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:09.498186 env[1583]: time="2025-08-13T00:03:09.498111870Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/56cf79d706b75f89b8efee82e45983961c6847dbb9ee30eaa3f0b68794f292e7 pid=2794 runtime=io.containerd.runc.v2 Aug 13 00:03:09.545000 audit[2840]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2840 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.551397 kernel: kauditd_printk_skb: 4 callbacks suppressed Aug 13 00:03:09.551508 kernel: audit: type=1325 audit(1755043389.545:241): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2840 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.545000 audit[2840]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb95df70 a2=0 a3=1 items=0 ppid=2755 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.602787 kernel: audit: type=1300 audit(1755043389.545:241): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb95df70 a2=0 a3=1 items=0 ppid=2755 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.545000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:03:09.621127 kernel: audit: type=1327 audit(1755043389.545:241): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:03:09.568000 audit[2839]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=2839 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.638340 kernel: audit: type=1325 audit(1755043389.568:242): table=mangle:42 family=2 entries=1 op=nft_register_chain pid=2839 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.640870 env[1583]: time="2025-08-13T00:03:09.640690482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-jsq9f,Uid:e8bad7ea-9941-43e6-97fc-fd9e045db6ac,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"56cf79d706b75f89b8efee82e45983961c6847dbb9ee30eaa3f0b68794f292e7\"" Aug 13 00:03:09.568000 audit[2839]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffceb3dab0 a2=0 a3=1 items=0 ppid=2755 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.674377 kernel: audit: type=1300 audit(1755043389.568:242): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffceb3dab0 a2=0 a3=1 items=0 ppid=2755 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.676686 env[1583]: time="2025-08-13T00:03:09.676287455Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:03:09.568000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:03:09.693823 kernel: audit: type=1327 audit(1755043389.568:242): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:03:09.570000 audit[2843]: NETFILTER_CFG table=nat:43 family=10 entries=1 op=nft_register_chain pid=2843 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.710466 kernel: audit: type=1325 
audit(1755043389.570:243): table=nat:43 family=10 entries=1 op=nft_register_chain pid=2843 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.570000 audit[2843]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff46d3aa0 a2=0 a3=1 items=0 ppid=2755 pid=2843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.742758 kernel: audit: type=1300 audit(1755043389.570:243): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff46d3aa0 a2=0 a3=1 items=0 ppid=2755 pid=2843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.570000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:03:09.758614 kernel: audit: type=1327 audit(1755043389.570:243): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:03:09.571000 audit[2844]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_chain pid=2844 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.773601 kernel: audit: type=1325 audit(1755043389.571:244): table=nat:44 family=2 entries=1 op=nft_register_chain pid=2844 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.571000 audit[2844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce6d2870 a2=0 a3=1 items=0 ppid=2755 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.571000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:03:09.573000 audit[2845]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_chain pid=2845 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.573000 audit[2845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdf8dd550 a2=0 a3=1 items=0 ppid=2755 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.573000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 00:03:09.609000 audit[2846]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2846 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.609000 audit[2846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe875cb20 a2=0 a3=1 items=0 ppid=2755 pid=2846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.609000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 00:03:09.638000 audit[2853]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.638000 audit[2853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcb5111c0 a2=0 a3=1 items=0 ppid=2755 pid=2853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.638000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 00:03:09.651000 audit[2855]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2855 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.651000 audit[2855]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc133daa0 a2=0 a3=1 items=0 ppid=2755 pid=2855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.651000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Aug 13 00:03:09.656000 audit[2858]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=2858 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.656000 audit[2858]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd1a78940 a2=0 a3=1 items=0 ppid=2755 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.656000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Aug 13 00:03:09.656000 audit[2859]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2859 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.656000 audit[2859]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=ffffecd6add0 a2=0 a3=1 items=0 ppid=2755 pid=2859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.656000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 00:03:09.661000 audit[2861]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2861 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.661000 audit[2861]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff8c377c0 a2=0 a3=1 items=0 ppid=2755 pid=2861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.661000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 00:03:09.666000 audit[2862]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2862 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.666000 audit[2862]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce09e7d0 a2=0 a3=1 items=0 ppid=2755 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.666000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 00:03:09.666000 audit[2864]: NETFILTER_CFG table=filter:53 family=2 entries=1 
op=nft_register_rule pid=2864 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.666000 audit[2864]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd6f71ea0 a2=0 a3=1 items=0 ppid=2755 pid=2864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.666000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 00:03:09.681000 audit[2867]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.681000 audit[2867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe7d88f40 a2=0 a3=1 items=0 ppid=2755 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.681000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Aug 13 00:03:09.686000 audit[2868]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=2868 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.686000 audit[2868]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffedbe0490 a2=0 a3=1 items=0 ppid=2755 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.686000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 00:03:09.740000 audit[2870]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2870 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.740000 audit[2870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffc17a5c0 a2=0 a3=1 items=0 ppid=2755 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.740000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 00:03:09.740000 audit[2871]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.740000 audit[2871]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd8500820 a2=0 a3=1 items=0 ppid=2755 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.740000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 00:03:09.743000 audit[2873]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=2873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.743000 audit[2873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff292e680 a2=0 a3=1 items=0 ppid=2755 pid=2873 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.743000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:03:09.754000 audit[2876]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_rule pid=2876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.754000 audit[2876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc43f3df0 a2=0 a3=1 items=0 ppid=2755 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.754000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:03:09.758000 audit[2879]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=2879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.758000 audit[2879]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe71613c0 a2=0 a3=1 items=0 ppid=2755 pid=2879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.758000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 00:03:09.776000 audit[2880]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2880 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.776000 audit[2880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc70d7690 a2=0 a3=1 items=0 ppid=2755 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.776000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 00:03:09.778000 audit[2882]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.778000 audit[2882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc01a0b00 a2=0 a3=1 items=0 ppid=2755 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.778000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:03:09.782000 audit[2885]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_rule pid=2885 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.782000 audit[2885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc30dc300 a2=0 a3=1 items=0 ppid=2755 pid=2885 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.782000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:03:09.785000 audit[2886]: NETFILTER_CFG table=nat:64 family=2 entries=1 op=nft_register_chain pid=2886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.785000 audit[2886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6e850f0 a2=0 a3=1 items=0 ppid=2755 pid=2886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.785000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 00:03:09.788000 audit[2888]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_rule pid=2888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:03:09.788000 audit[2888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe56a40a0 a2=0 a3=1 items=0 ppid=2755 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.788000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 00:03:09.892000 audit[2894]: NETFILTER_CFG table=filter:66 family=2 entries=8 
op=nft_register_rule pid=2894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:09.892000 audit[2894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe1e54610 a2=0 a3=1 items=0 ppid=2755 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.892000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:09.930000 audit[2894]: NETFILTER_CFG table=nat:67 family=2 entries=14 op=nft_register_chain pid=2894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:09.930000 audit[2894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffe1e54610 a2=0 a3=1 items=0 ppid=2755 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.930000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:09.931000 audit[2899]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2899 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.931000 audit[2899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc4344740 a2=0 a3=1 items=0 ppid=2755 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.931000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 00:03:09.934000 audit[2901]: 
NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=2901 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.934000 audit[2901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd9790950 a2=0 a3=1 items=0 ppid=2755 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.934000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Aug 13 00:03:09.939000 audit[2904]: NETFILTER_CFG table=filter:70 family=10 entries=2 op=nft_register_chain pid=2904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.939000 audit[2904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd02c1e60 a2=0 a3=1 items=0 ppid=2755 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Aug 13 00:03:09.940000 audit[2905]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_chain pid=2905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.940000 audit[2905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7bd7340 a2=0 a3=1 items=0 ppid=2755 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 00:03:09.943000 audit[2907]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2907 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.943000 audit[2907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc1983a30 a2=0 a3=1 items=0 ppid=2755 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 00:03:09.944000 audit[2908]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2908 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.944000 audit[2908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8ee7fe0 a2=0 a3=1 items=0 ppid=2755 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.944000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 00:03:09.947000 audit[2910]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2910 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.947000 audit[2910]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=744 a0=3 a1=ffffc04d3fc0 a2=0 a3=1 items=0 ppid=2755 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.947000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Aug 13 00:03:09.951000 audit[2913]: NETFILTER_CFG table=filter:75 family=10 entries=2 op=nft_register_chain pid=2913 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.951000 audit[2913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd41991a0 a2=0 a3=1 items=0 ppid=2755 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.951000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 00:03:09.952000 audit[2914]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_chain pid=2914 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.952000 audit[2914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1ec9e20 a2=0 a3=1 items=0 ppid=2755 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.952000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 00:03:09.954000 audit[2916]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2916 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.954000 audit[2916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff55438d0 a2=0 a3=1 items=0 ppid=2755 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.954000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 00:03:09.955000 audit[2917]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_chain pid=2917 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.955000 audit[2917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeaec53d0 a2=0 a3=1 items=0 ppid=2755 pid=2917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.955000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 00:03:09.958000 audit[2919]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule pid=2919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.958000 audit[2919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff1f812a0 a2=0 a3=1 items=0 ppid=2755 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.958000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:03:09.961000 audit[2922]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=2922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.961000 audit[2922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe4315080 a2=0 a3=1 items=0 ppid=2755 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 00:03:09.965000 audit[2925]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=2925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.965000 audit[2925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe99ab330 a2=0 a3=1 items=0 ppid=2755 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.965000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Aug 13 00:03:09.966000 audit[2926]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2926 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.966000 audit[2926]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe83accd0 a2=0 a3=1 items=0 ppid=2755 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.966000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 00:03:09.968000 audit[2928]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2928 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.968000 audit[2928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffffca8630 a2=0 a3=1 items=0 ppid=2755 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.968000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:03:09.971000 audit[2931]: NETFILTER_CFG table=nat:84 family=10 entries=2 op=nft_register_chain pid=2931 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.971000 audit[2931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffffe16d4c0 a2=0 a3=1 items=0 ppid=2755 
pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.971000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:03:09.972000 audit[2932]: NETFILTER_CFG table=nat:85 family=10 entries=1 op=nft_register_chain pid=2932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.972000 audit[2932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff34fda30 a2=0 a3=1 items=0 ppid=2755 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.972000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 00:03:09.975000 audit[2934]: NETFILTER_CFG table=nat:86 family=10 entries=2 op=nft_register_chain pid=2934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.975000 audit[2934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd5a22160 a2=0 a3=1 items=0 ppid=2755 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 00:03:09.977000 audit[2935]: NETFILTER_CFG table=filter:87 
family=10 entries=1 op=nft_register_chain pid=2935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.977000 audit[2935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd11c9490 a2=0 a3=1 items=0 ppid=2755 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.977000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 00:03:09.979000 audit[2937]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.979000 audit[2937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcd323190 a2=0 a3=1 items=0 ppid=2755 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.979000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:03:09.982000 audit[2940]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_rule pid=2940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:03:09.982000 audit[2940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffcc1ca00 a2=0 a3=1 items=0 ppid=2755 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:03:09.985000 
audit[2942]: NETFILTER_CFG table=filter:90 family=10 entries=3 op=nft_register_rule pid=2942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 00:03:09.985000 audit[2942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffef5005b0 a2=0 a3=1 items=0 ppid=2755 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.985000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:09.985000 audit[2942]: NETFILTER_CFG table=nat:91 family=10 entries=7 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 00:03:09.985000 audit[2942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffef5005b0 a2=0 a3=1 items=0 ppid=2755 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:09.985000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:10.032639 systemd[1]: run-containerd-runc-k8s.io-48913504c92b25a95d7350598d201caede79fb2730a7459b28a8d1e27301c1fb-runc.wJs7Ty.mount: Deactivated successfully. Aug 13 00:03:11.658419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552150176.mount: Deactivated successfully. 
Aug 13 00:03:12.241254 env[1583]: time="2025-08-13T00:03:12.241202315Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:12.246200 env[1583]: time="2025-08-13T00:03:12.246162997Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:12.250910 env[1583]: time="2025-08-13T00:03:12.250876439Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:12.254826 env[1583]: time="2025-08-13T00:03:12.254795600Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:12.255486 env[1583]: time="2025-08-13T00:03:12.255448680Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 13 00:03:12.258509 env[1583]: time="2025-08-13T00:03:12.258424681Z" level=info msg="CreateContainer within sandbox \"56cf79d706b75f89b8efee82e45983961c6847dbb9ee30eaa3f0b68794f292e7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:03:12.276193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1742660056.mount: Deactivated successfully. 
Aug 13 00:03:12.292427 env[1583]: time="2025-08-13T00:03:12.292378093Z" level=info msg="CreateContainer within sandbox \"56cf79d706b75f89b8efee82e45983961c6847dbb9ee30eaa3f0b68794f292e7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a6da38dc6ff11ef735cfa7a67694b56cc759e5f10bf80476e83300d99a067be6\"" Aug 13 00:03:12.294802 env[1583]: time="2025-08-13T00:03:12.293131413Z" level=info msg="StartContainer for \"a6da38dc6ff11ef735cfa7a67694b56cc759e5f10bf80476e83300d99a067be6\"" Aug 13 00:03:12.347521 env[1583]: time="2025-08-13T00:03:12.347472391Z" level=info msg="StartContainer for \"a6da38dc6ff11ef735cfa7a67694b56cc759e5f10bf80476e83300d99a067be6\" returns successfully" Aug 13 00:03:12.590246 kubelet[2655]: I0813 00:03:12.589538 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-99qfr" podStartSLOduration=4.589520553 podStartE2EDuration="4.589520553s" podCreationTimestamp="2025-08-13 00:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:09.68990974 +0000 UTC m=+7.368808486" watchObservedRunningTime="2025-08-13 00:03:12.589520553 +0000 UTC m=+10.268419259" Aug 13 00:03:12.624994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482099073.mount: Deactivated successfully. 
Aug 13 00:03:13.755064 kubelet[2655]: I0813 00:03:13.755009 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-jsq9f" podStartSLOduration=2.173854435 podStartE2EDuration="4.754981981s" podCreationTimestamp="2025-08-13 00:03:09 +0000 UTC" firstStartedPulling="2025-08-13 00:03:09.675643775 +0000 UTC m=+7.354542521" lastFinishedPulling="2025-08-13 00:03:12.256771321 +0000 UTC m=+9.935670067" observedRunningTime="2025-08-13 00:03:12.589866193 +0000 UTC m=+10.268764939" watchObservedRunningTime="2025-08-13 00:03:13.754981981 +0000 UTC m=+11.433880727" Aug 13 00:03:18.530220 sudo[2014]: pam_unix(sudo:session): session closed for user root Aug 13 00:03:18.529000 audit[2014]: USER_END pid=2014 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:03:18.536338 kernel: kauditd_printk_skb: 143 callbacks suppressed Aug 13 00:03:18.536476 kernel: audit: type=1106 audit(1755043398.529:292): pid=2014 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:03:18.529000 audit[2014]: CRED_DISP pid=2014 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:03:18.581586 kernel: audit: type=1104 audit(1755043398.529:293): pid=2014 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:18.644212 sshd[1971]: pam_unix(sshd:session): session closed for user core Aug 13 00:03:18.645000 audit[1971]: USER_END pid=1971 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:03:18.675886 systemd[1]: sshd@6-10.200.20.35:22-10.200.16.10:42974.service: Deactivated successfully. Aug 13 00:03:18.677263 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:03:18.677612 systemd-logind[1566]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:03:18.678518 systemd-logind[1566]: Removed session 9. Aug 13 00:03:18.645000 audit[1971]: CRED_DISP pid=1971 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:03:18.709450 kernel: audit: type=1106 audit(1755043398.645:294): pid=1971 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:03:18.709585 kernel: audit: type=1104 audit(1755043398.645:295): pid=1971 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:03:18.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.35:22-10.200.16.10:42974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:18.733366 kernel: audit: type=1131 audit(1755043398.675:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.35:22-10.200.16.10:42974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:21.048000 audit[3027]: NETFILTER_CFG table=filter:92 family=2 entries=15 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:21.048000 audit[3027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd25ab270 a2=0 a3=1 items=0 ppid=2755 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:21.103215 kernel: audit: type=1325 audit(1755043401.048:297): table=filter:92 family=2 entries=15 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:21.103362 kernel: audit: type=1300 audit(1755043401.048:297): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd25ab270 a2=0 a3=1 items=0 ppid=2755 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:21.048000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:21.127704 kernel: audit: type=1327 audit(1755043401.048:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:21.063000 audit[3027]: NETFILTER_CFG table=nat:93 family=2 entries=12 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:21.150108 kernel: audit: type=1325 
audit(1755043401.063:298): table=nat:93 family=2 entries=12 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:21.063000 audit[3027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd25ab270 a2=0 a3=1 items=0 ppid=2755 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:21.190080 kernel: audit: type=1300 audit(1755043401.063:298): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd25ab270 a2=0 a3=1 items=0 ppid=2755 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:21.063000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:21.110000 audit[3029]: NETFILTER_CFG table=filter:94 family=2 entries=16 op=nft_register_rule pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:21.110000 audit[3029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd0195e70 a2=0 a3=1 items=0 ppid=2755 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:21.110000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:21.129000 audit[3029]: NETFILTER_CFG table=nat:95 family=2 entries=12 op=nft_register_rule pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:21.129000 audit[3029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd0195e70 
a2=0 a3=1 items=0 ppid=2755 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:21.129000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:26.301000 audit[3031]: NETFILTER_CFG table=filter:96 family=2 entries=17 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:26.307524 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 00:03:26.307642 kernel: audit: type=1325 audit(1755043406.301:301): table=filter:96 family=2 entries=17 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:26.301000 audit[3031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffeb3b6d90 a2=0 a3=1 items=0 ppid=2755 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:26.355465 kernel: audit: type=1300 audit(1755043406.301:301): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffeb3b6d90 a2=0 a3=1 items=0 ppid=2755 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:26.301000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:26.374121 kernel: audit: type=1327 audit(1755043406.301:301): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:26.417000 audit[3031]: NETFILTER_CFG table=nat:97 family=2 
entries=12 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:26.438686 kernel: audit: type=1325 audit(1755043406.417:302): table=nat:97 family=2 entries=12 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:26.438787 kernel: audit: type=1300 audit(1755043406.417:302): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffeb3b6d90 a2=0 a3=1 items=0 ppid=2755 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:26.417000 audit[3031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffeb3b6d90 a2=0 a3=1 items=0 ppid=2755 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:26.456347 kubelet[2655]: I0813 00:03:26.456301 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46h9r\" (UniqueName: \"kubernetes.io/projected/0cba09d7-da92-4cb2-90df-1bcfab59daad-kube-api-access-46h9r\") pod \"calico-typha-6f56d4f4dc-n9845\" (UID: \"0cba09d7-da92-4cb2-90df-1bcfab59daad\") " pod="calico-system/calico-typha-6f56d4f4dc-n9845" Aug 13 00:03:26.456880 kubelet[2655]: I0813 00:03:26.456851 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cba09d7-da92-4cb2-90df-1bcfab59daad-tigera-ca-bundle\") pod \"calico-typha-6f56d4f4dc-n9845\" (UID: \"0cba09d7-da92-4cb2-90df-1bcfab59daad\") " pod="calico-system/calico-typha-6f56d4f4dc-n9845" Aug 13 00:03:26.456981 kubelet[2655]: I0813 00:03:26.456967 2655 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0cba09d7-da92-4cb2-90df-1bcfab59daad-typha-certs\") pod \"calico-typha-6f56d4f4dc-n9845\" (UID: \"0cba09d7-da92-4cb2-90df-1bcfab59daad\") " pod="calico-system/calico-typha-6f56d4f4dc-n9845" Aug 13 00:03:26.417000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:26.485820 kernel: audit: type=1327 audit(1755043406.417:302): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:26.499000 audit[3034]: NETFILTER_CFG table=filter:98 family=2 entries=19 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:26.499000 audit[3034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd20f59a0 a2=0 a3=1 items=0 ppid=2755 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:26.553303 kernel: audit: type=1325 audit(1755043406.499:303): table=filter:98 family=2 entries=19 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:26.553452 kernel: audit: type=1300 audit(1755043406.499:303): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd20f59a0 a2=0 a3=1 items=0 ppid=2755 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:26.499000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:26.568079 kernel: audit: type=1327 audit(1755043406.499:303): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:26.524000 audit[3034]: NETFILTER_CFG table=nat:99 family=2 entries=12 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:26.610637 kernel: audit: type=1325 audit(1755043406.524:304): table=nat:99 family=2 entries=12 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:26.524000 audit[3034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd20f59a0 a2=0 a3=1 items=0 ppid=2755 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:26.524000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:26.678240 env[1583]: time="2025-08-13T00:03:26.678192778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f56d4f4dc-n9845,Uid:0cba09d7-da92-4cb2-90df-1bcfab59daad,Namespace:calico-system,Attempt:0,}" Aug 13 00:03:26.708061 env[1583]: time="2025-08-13T00:03:26.707967705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:26.708061 env[1583]: time="2025-08-13T00:03:26.708058425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:26.708233 env[1583]: time="2025-08-13T00:03:26.708085385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:26.708261 env[1583]: time="2025-08-13T00:03:26.708229305Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3181678f1b2aebd57bfbfd533d141e069cb37d5c76516d53ef25dd930db8821 pid=3043 runtime=io.containerd.runc.v2 Aug 13 00:03:26.819891 env[1583]: time="2025-08-13T00:03:26.819745773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f56d4f4dc-n9845,Uid:0cba09d7-da92-4cb2-90df-1bcfab59daad,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3181678f1b2aebd57bfbfd533d141e069cb37d5c76516d53ef25dd930db8821\"" Aug 13 00:03:26.821884 env[1583]: time="2025-08-13T00:03:26.821842693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:03:26.860058 kubelet[2655]: I0813 00:03:26.860011 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d-cni-log-dir\") pod \"calico-node-kdqm4\" (UID: \"4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d\") " pod="calico-system/calico-node-kdqm4" Aug 13 00:03:26.860058 kubelet[2655]: I0813 00:03:26.860056 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d-flexvol-driver-host\") pod \"calico-node-kdqm4\" (UID: \"4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d\") " pod="calico-system/calico-node-kdqm4" Aug 13 00:03:26.860250 kubelet[2655]: I0813 00:03:26.860075 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d-var-run-calico\") pod \"calico-node-kdqm4\" (UID: \"4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d\") " pod="calico-system/calico-node-kdqm4" Aug 13 
00:03:26.860250 kubelet[2655]: I0813 00:03:26.860093 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d-xtables-lock\") pod \"calico-node-kdqm4\" (UID: \"4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d\") " pod="calico-system/calico-node-kdqm4" Aug 13 00:03:26.860250 kubelet[2655]: I0813 00:03:26.860118 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d-cni-net-dir\") pod \"calico-node-kdqm4\" (UID: \"4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d\") " pod="calico-system/calico-node-kdqm4" Aug 13 00:03:26.860250 kubelet[2655]: I0813 00:03:26.860134 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d-node-certs\") pod \"calico-node-kdqm4\" (UID: \"4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d\") " pod="calico-system/calico-node-kdqm4" Aug 13 00:03:26.860250 kubelet[2655]: I0813 00:03:26.860153 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snl25\" (UniqueName: \"kubernetes.io/projected/4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d-kube-api-access-snl25\") pod \"calico-node-kdqm4\" (UID: \"4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d\") " pod="calico-system/calico-node-kdqm4" Aug 13 00:03:26.860375 kubelet[2655]: I0813 00:03:26.860169 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d-cni-bin-dir\") pod \"calico-node-kdqm4\" (UID: \"4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d\") " pod="calico-system/calico-node-kdqm4" Aug 13 00:03:26.860375 kubelet[2655]: I0813 00:03:26.860184 2655 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d-lib-modules\") pod \"calico-node-kdqm4\" (UID: \"4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d\") " pod="calico-system/calico-node-kdqm4" Aug 13 00:03:26.860375 kubelet[2655]: I0813 00:03:26.860201 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d-policysync\") pod \"calico-node-kdqm4\" (UID: \"4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d\") " pod="calico-system/calico-node-kdqm4" Aug 13 00:03:26.860375 kubelet[2655]: I0813 00:03:26.860218 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d-tigera-ca-bundle\") pod \"calico-node-kdqm4\" (UID: \"4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d\") " pod="calico-system/calico-node-kdqm4" Aug 13 00:03:26.860375 kubelet[2655]: I0813 00:03:26.860238 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d-var-lib-calico\") pod \"calico-node-kdqm4\" (UID: \"4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d\") " pod="calico-system/calico-node-kdqm4" Aug 13 00:03:26.888133 kubelet[2655]: E0813 00:03:26.888075 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dc7fc" podUID="8e788131-5ffd-4005-9137-e23c17af1da5" Aug 13 00:03:26.960588 kubelet[2655]: I0813 00:03:26.960542 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8e788131-5ffd-4005-9137-e23c17af1da5-registration-dir\") pod \"csi-node-driver-dc7fc\" (UID: \"8e788131-5ffd-4005-9137-e23c17af1da5\") " pod="calico-system/csi-node-driver-dc7fc" Aug 13 00:03:26.960809 kubelet[2655]: I0813 00:03:26.960794 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4glh5\" (UniqueName: \"kubernetes.io/projected/8e788131-5ffd-4005-9137-e23c17af1da5-kube-api-access-4glh5\") pod \"csi-node-driver-dc7fc\" (UID: \"8e788131-5ffd-4005-9137-e23c17af1da5\") " pod="calico-system/csi-node-driver-dc7fc" Aug 13 00:03:26.960930 kubelet[2655]: I0813 00:03:26.960917 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e788131-5ffd-4005-9137-e23c17af1da5-kubelet-dir\") pod \"csi-node-driver-dc7fc\" (UID: \"8e788131-5ffd-4005-9137-e23c17af1da5\") " pod="calico-system/csi-node-driver-dc7fc" Aug 13 00:03:26.961052 kubelet[2655]: I0813 00:03:26.961040 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8e788131-5ffd-4005-9137-e23c17af1da5-varrun\") pod \"csi-node-driver-dc7fc\" (UID: \"8e788131-5ffd-4005-9137-e23c17af1da5\") " pod="calico-system/csi-node-driver-dc7fc" Aug 13 00:03:26.961190 kubelet[2655]: I0813 00:03:26.961169 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8e788131-5ffd-4005-9137-e23c17af1da5-socket-dir\") pod \"csi-node-driver-dc7fc\" (UID: \"8e788131-5ffd-4005-9137-e23c17af1da5\") " pod="calico-system/csi-node-driver-dc7fc" Aug 13 00:03:26.962325 kubelet[2655]: E0813 00:03:26.962305 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON 
input Aug 13 00:03:26.962435 kubelet[2655]: W0813 00:03:26.962421 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:26.962540 kubelet[2655]: E0813 00:03:26.962527 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:26.962901 kubelet[2655]: E0813 00:03:26.962879 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:26.963009 kubelet[2655]: W0813 00:03:26.962996 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:26.963083 kubelet[2655]: E0813 00:03:26.963072 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:26.963313 kubelet[2655]: E0813 00:03:26.963302 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:26.963397 kubelet[2655]: W0813 00:03:26.963384 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:26.963486 kubelet[2655]: E0813 00:03:26.963463 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:26.965194 kubelet[2655]: E0813 00:03:26.965162 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:26.965312 kubelet[2655]: W0813 00:03:26.965297 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:26.965409 kubelet[2655]: E0813 00:03:26.965396 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:26.965690 kubelet[2655]: E0813 00:03:26.965652 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:26.965690 kubelet[2655]: W0813 00:03:26.965683 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:26.965785 kubelet[2655]: E0813 00:03:26.965703 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:26.966038 kubelet[2655]: E0813 00:03:26.966023 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:26.966135 kubelet[2655]: W0813 00:03:26.966122 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:26.966210 kubelet[2655]: E0813 00:03:26.966197 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:26.966497 kubelet[2655]: E0813 00:03:26.966481 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:26.966585 kubelet[2655]: W0813 00:03:26.966573 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:26.966654 kubelet[2655]: E0813 00:03:26.966643 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:26.967019 kubelet[2655]: E0813 00:03:26.967005 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:26.967131 kubelet[2655]: W0813 00:03:26.967120 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:26.967209 kubelet[2655]: E0813 00:03:26.967198 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:26.983552 kubelet[2655]: E0813 00:03:26.983516 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:26.983552 kubelet[2655]: W0813 00:03:26.983539 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:26.983552 kubelet[2655]: E0813 00:03:26.983559 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:26.992734 kubelet[2655]: E0813 00:03:26.992702 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:26.992734 kubelet[2655]: W0813 00:03:26.992723 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:26.992914 kubelet[2655]: E0813 00:03:26.992743 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:27.062273 kubelet[2655]: E0813 00:03:27.062243 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.062455 kubelet[2655]: W0813 00:03:27.062441 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.062537 kubelet[2655]: E0813 00:03:27.062524 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:27.062847 kubelet[2655]: E0813 00:03:27.062834 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.062953 kubelet[2655]: W0813 00:03:27.062941 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.063029 kubelet[2655]: E0813 00:03:27.063018 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:27.063247 kubelet[2655]: E0813 00:03:27.063229 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.063247 kubelet[2655]: W0813 00:03:27.063245 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.063356 kubelet[2655]: E0813 00:03:27.063263 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:27.063417 kubelet[2655]: E0813 00:03:27.063396 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.063417 kubelet[2655]: W0813 00:03:27.063413 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.063491 kubelet[2655]: E0813 00:03:27.063424 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:27.063561 kubelet[2655]: E0813 00:03:27.063547 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.063561 kubelet[2655]: W0813 00:03:27.063559 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.063642 kubelet[2655]: E0813 00:03:27.063573 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:27.063773 kubelet[2655]: E0813 00:03:27.063756 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.063773 kubelet[2655]: W0813 00:03:27.063770 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.063848 kubelet[2655]: E0813 00:03:27.063780 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:27.063941 kubelet[2655]: E0813 00:03:27.063928 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.063941 kubelet[2655]: W0813 00:03:27.063939 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.064024 kubelet[2655]: E0813 00:03:27.063955 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:27.064106 kubelet[2655]: E0813 00:03:27.064093 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.064159 kubelet[2655]: W0813 00:03:27.064106 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.064159 kubelet[2655]: E0813 00:03:27.064119 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:27.064277 kubelet[2655]: E0813 00:03:27.064245 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.064277 kubelet[2655]: W0813 00:03:27.064257 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.064376 kubelet[2655]: E0813 00:03:27.064360 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:27.064479 kubelet[2655]: E0813 00:03:27.064369 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.064544 kubelet[2655]: W0813 00:03:27.064532 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.064719 kubelet[2655]: E0813 00:03:27.064703 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:27.064889 kubelet[2655]: E0813 00:03:27.064841 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.064965 kubelet[2655]: W0813 00:03:27.064952 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.065051 kubelet[2655]: E0813 00:03:27.065040 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:27.065856 kubelet[2655]: E0813 00:03:27.065260 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.065856 kubelet[2655]: W0813 00:03:27.065278 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.065856 kubelet[2655]: E0813 00:03:27.065294 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:27.065856 kubelet[2655]: E0813 00:03:27.065443 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.065856 kubelet[2655]: W0813 00:03:27.065452 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.065856 kubelet[2655]: E0813 00:03:27.065461 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:27.065856 kubelet[2655]: E0813 00:03:27.065580 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.065856 kubelet[2655]: W0813 00:03:27.065587 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.065856 kubelet[2655]: E0813 00:03:27.065600 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:27.065856 kubelet[2655]: E0813 00:03:27.065810 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.066149 kubelet[2655]: W0813 00:03:27.065827 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.066149 kubelet[2655]: E0813 00:03:27.065838 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:27.066149 kubelet[2655]: E0813 00:03:27.065987 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.066149 kubelet[2655]: W0813 00:03:27.065995 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.066149 kubelet[2655]: E0813 00:03:27.066004 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:27.066149 kubelet[2655]: E0813 00:03:27.066122 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.066149 kubelet[2655]: W0813 00:03:27.066129 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.066149 kubelet[2655]: E0813 00:03:27.066136 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:27.066339 kubelet[2655]: E0813 00:03:27.066243 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.066339 kubelet[2655]: W0813 00:03:27.066249 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.066339 kubelet[2655]: E0813 00:03:27.066256 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:27.066607 kubelet[2655]: E0813 00:03:27.066422 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.066607 kubelet[2655]: W0813 00:03:27.066437 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.066607 kubelet[2655]: E0813 00:03:27.066446 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:27.068752 kubelet[2655]: E0813 00:03:27.066909 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.068752 kubelet[2655]: W0813 00:03:27.066928 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.068752 kubelet[2655]: E0813 00:03:27.066948 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:27.068752 kubelet[2655]: E0813 00:03:27.067097 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.068752 kubelet[2655]: W0813 00:03:27.067104 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.068752 kubelet[2655]: E0813 00:03:27.067112 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:27.068752 kubelet[2655]: E0813 00:03:27.067373 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.068752 kubelet[2655]: W0813 00:03:27.067382 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.068752 kubelet[2655]: E0813 00:03:27.067391 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:27.068752 kubelet[2655]: E0813 00:03:27.067523 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.069038 kubelet[2655]: W0813 00:03:27.067533 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.069038 kubelet[2655]: E0813 00:03:27.067541 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:27.069038 kubelet[2655]: E0813 00:03:27.067642 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.069038 kubelet[2655]: W0813 00:03:27.067648 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.069038 kubelet[2655]: E0813 00:03:27.067670 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:27.069038 kubelet[2655]: E0813 00:03:27.067827 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.069038 kubelet[2655]: W0813 00:03:27.067835 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.069038 kubelet[2655]: E0813 00:03:27.067846 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:27.069461 env[1583]: time="2025-08-13T00:03:27.069310314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kdqm4,Uid:4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d,Namespace:calico-system,Attempt:0,}" Aug 13 00:03:27.083960 kubelet[2655]: E0813 00:03:27.083867 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:27.083960 kubelet[2655]: W0813 00:03:27.083897 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:27.083960 kubelet[2655]: E0813 00:03:27.083915 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:27.124110 env[1583]: time="2025-08-13T00:03:27.124020807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:27.124110 env[1583]: time="2025-08-13T00:03:27.124073047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:27.131596 env[1583]: time="2025-08-13T00:03:27.124084927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:27.133116 env[1583]: time="2025-08-13T00:03:27.131872329Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e1009dbe17cd6a165e32672074587685f382525aed7f29785e6da4af1e14b25 pid=3122 runtime=io.containerd.runc.v2 Aug 13 00:03:27.206217 env[1583]: time="2025-08-13T00:03:27.206159947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kdqm4,Uid:4c4e9a3e-6d11-49dc-9db3-6c9fbc6c473d,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e1009dbe17cd6a165e32672074587685f382525aed7f29785e6da4af1e14b25\"" Aug 13 00:03:27.576901 systemd[1]: run-containerd-runc-k8s.io-b3181678f1b2aebd57bfbfd533d141e069cb37d5c76516d53ef25dd930db8821-runc.8I4mir.mount: Deactivated successfully. Aug 13 00:03:27.607000 audit[3158]: NETFILTER_CFG table=filter:100 family=2 entries=21 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:27.607000 audit[3158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffecbff0a0 a2=0 a3=1 items=0 ppid=2755 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:27.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:27.616000 audit[3158]: NETFILTER_CFG table=nat:101 family=2 entries=12 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:27.616000 audit[3158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffecbff0a0 a2=0 a3=1 items=0 ppid=2755 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:27.616000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:28.111778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2572604683.mount: Deactivated successfully. Aug 13 00:03:28.492692 kubelet[2655]: E0813 00:03:28.492217 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dc7fc" podUID="8e788131-5ffd-4005-9137-e23c17af1da5" Aug 13 00:03:28.633000 audit[3160]: NETFILTER_CFG table=filter:102 family=2 entries=22 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:28.633000 audit[3160]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc13e58a0 a2=0 a3=1 items=0 ppid=2755 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:28.633000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:28.637000 audit[3160]: NETFILTER_CFG table=nat:103 family=2 entries=12 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:28.637000 audit[3160]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc13e58a0 a2=0 a3=1 items=0 ppid=2755 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:28.637000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:28.873549 env[1583]: time="2025-08-13T00:03:28.873442587Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:28.879650 env[1583]: time="2025-08-13T00:03:28.879614068Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:28.883165 env[1583]: time="2025-08-13T00:03:28.883136269Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:28.886348 env[1583]: time="2025-08-13T00:03:28.886315190Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:28.886586 env[1583]: time="2025-08-13T00:03:28.886561590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 13 00:03:28.889957 env[1583]: time="2025-08-13T00:03:28.889927070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:03:28.902572 env[1583]: time="2025-08-13T00:03:28.902492953Z" level=info msg="CreateContainer within sandbox \"b3181678f1b2aebd57bfbfd533d141e069cb37d5c76516d53ef25dd930db8821\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:03:28.923413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4026667824.mount: Deactivated successfully. 
Aug 13 00:03:28.936614 env[1583]: time="2025-08-13T00:03:28.936554162Z" level=info msg="CreateContainer within sandbox \"b3181678f1b2aebd57bfbfd533d141e069cb37d5c76516d53ef25dd930db8821\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b6101aed62e3d5806ad767de4ca01fe1ad0f08a8df7187ad3add0624c1b38146\"" Aug 13 00:03:28.938646 env[1583]: time="2025-08-13T00:03:28.938293802Z" level=info msg="StartContainer for \"b6101aed62e3d5806ad767de4ca01fe1ad0f08a8df7187ad3add0624c1b38146\"" Aug 13 00:03:29.005423 env[1583]: time="2025-08-13T00:03:29.005380698Z" level=info msg="StartContainer for \"b6101aed62e3d5806ad767de4ca01fe1ad0f08a8df7187ad3add0624c1b38146\" returns successfully" Aug 13 00:03:29.635467 kubelet[2655]: I0813 00:03:29.635393 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f56d4f4dc-n9845" podStartSLOduration=1.569031987 podStartE2EDuration="3.635378324s" podCreationTimestamp="2025-08-13 00:03:26 +0000 UTC" firstStartedPulling="2025-08-13 00:03:26.821413693 +0000 UTC m=+24.500312439" lastFinishedPulling="2025-08-13 00:03:28.88776007 +0000 UTC m=+26.566658776" observedRunningTime="2025-08-13 00:03:29.634720684 +0000 UTC m=+27.313619430" watchObservedRunningTime="2025-08-13 00:03:29.635378324 +0000 UTC m=+27.314277070" Aug 13 00:03:29.651896 kubelet[2655]: E0813 00:03:29.651769 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:29.651896 kubelet[2655]: W0813 00:03:29.651792 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:29.651896 kubelet[2655]: E0813 00:03:29.651815 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:29.652343 kubelet[2655]: E0813 00:03:29.652153 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:29.652343 kubelet[2655]: W0813 00:03:29.652167 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:29.652343 kubelet[2655]: E0813 00:03:29.652179 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:29.652773 kubelet[2655]: E0813 00:03:29.652555 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:29.652773 kubelet[2655]: W0813 00:03:29.652568 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:29.652773 kubelet[2655]: E0813 00:03:29.652579 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:29.653085 kubelet[2655]: E0813 00:03:29.652927 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:29.653085 kubelet[2655]: W0813 00:03:29.652940 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:29.653085 kubelet[2655]: E0813 00:03:29.652951 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:29.653378 kubelet[2655]: E0813 00:03:29.653235 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:29.653378 kubelet[2655]: W0813 00:03:29.653248 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:29.653378 kubelet[2655]: E0813 00:03:29.653258 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:29.653689 kubelet[2655]: E0813 00:03:29.653518 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:29.653689 kubelet[2655]: W0813 00:03:29.653531 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:29.653689 kubelet[2655]: E0813 00:03:29.653542 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:29.654027 kubelet[2655]: E0813 00:03:29.653850 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:29.654027 kubelet[2655]: W0813 00:03:29.653867 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:29.654027 kubelet[2655]: E0813 00:03:29.653878 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:03:29.654286 kubelet[2655]: E0813 00:03:29.654179 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:29.654286 kubelet[2655]: W0813 00:03:29.654190 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:29.654286 kubelet[2655]: E0813 00:03:29.654201 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:29.654539 kubelet[2655]: E0813 00:03:29.654448 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:03:29.654539 kubelet[2655]: W0813 00:03:29.654459 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:03:29.654539 kubelet[2655]: E0813 00:03:29.654468 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:03:30.108291 env[1583]: time="2025-08-13T00:03:30.108245714Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:30.113032 env[1583]: time="2025-08-13T00:03:30.112999955Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:30.116102 env[1583]: time="2025-08-13T00:03:30.116058276Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:30.119705 env[1583]: time="2025-08-13T00:03:30.119649876Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:30.120192 env[1583]: time="2025-08-13T00:03:30.120163036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image 
reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 13 00:03:30.122914 env[1583]: time="2025-08-13T00:03:30.122548437Z" level=info msg="CreateContainer within sandbox \"7e1009dbe17cd6a165e32672074587685f382525aed7f29785e6da4af1e14b25\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:03:30.146060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2323766352.mount: Deactivated successfully. Aug 13 00:03:30.167127 env[1583]: time="2025-08-13T00:03:30.167069407Z" level=info msg="CreateContainer within sandbox \"7e1009dbe17cd6a165e32672074587685f382525aed7f29785e6da4af1e14b25\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"18022387a2df7bce8adc7a337140d3743eba58c0b8a35629fa06c7bd2e8f7835\"" Aug 13 00:03:30.169586 env[1583]: time="2025-08-13T00:03:30.169552128Z" level=info msg="StartContainer for \"18022387a2df7bce8adc7a337140d3743eba58c0b8a35629fa06c7bd2e8f7835\"" Aug 13 00:03:30.236075 env[1583]: time="2025-08-13T00:03:30.236019263Z" level=info msg="StartContainer for \"18022387a2df7bce8adc7a337140d3743eba58c0b8a35629fa06c7bd2e8f7835\" returns successfully" Aug 13 00:03:30.491684 kubelet[2655]: E0813 00:03:30.491431 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dc7fc" podUID="8e788131-5ffd-4005-9137-e23c17af1da5" Aug 13 00:03:30.673000 audit[3296]: NETFILTER_CFG table=filter:104 family=2 entries=21 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:30.673000 audit[3296]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd82c9610 a2=0 a3=1 items=0 ppid=2755 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:30.673000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:30.678000 audit[3296]: NETFILTER_CFG table=nat:105 family=2 entries=19 op=nft_register_chain pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:30.678000 audit[3296]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd82c9610 a2=0 a3=1 items=0 ppid=2755 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:30.678000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:30.893153 systemd[1]: run-containerd-runc-k8s.io-18022387a2df7bce8adc7a337140d3743eba58c0b8a35629fa06c7bd2e8f7835-runc.SYN8sm.mount: Deactivated successfully. Aug 13 00:03:30.893301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18022387a2df7bce8adc7a337140d3743eba58c0b8a35629fa06c7bd2e8f7835-rootfs.mount: Deactivated successfully. 
Aug 13 00:03:31.369649 env[1583]: time="2025-08-13T00:03:31.369596279Z" level=info msg="shim disconnected" id=18022387a2df7bce8adc7a337140d3743eba58c0b8a35629fa06c7bd2e8f7835 Aug 13 00:03:31.370120 env[1583]: time="2025-08-13T00:03:31.370098160Z" level=warning msg="cleaning up after shim disconnected" id=18022387a2df7bce8adc7a337140d3743eba58c0b8a35629fa06c7bd2e8f7835 namespace=k8s.io Aug 13 00:03:31.370229 env[1583]: time="2025-08-13T00:03:31.370213440Z" level=info msg="cleaning up dead shim" Aug 13 00:03:31.377946 env[1583]: time="2025-08-13T00:03:31.377885561Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:03:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3297 runtime=io.containerd.runc.v2\n" Aug 13 00:03:31.628804 env[1583]: time="2025-08-13T00:03:31.628527017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:03:32.492002 kubelet[2655]: E0813 00:03:32.491523 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dc7fc" podUID="8e788131-5ffd-4005-9137-e23c17af1da5" Aug 13 00:03:34.333761 env[1583]: time="2025-08-13T00:03:34.333717445Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:34.339273 env[1583]: time="2025-08-13T00:03:34.339234526Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:34.342376 env[1583]: time="2025-08-13T00:03:34.342322686Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 13 00:03:34.345413 env[1583]: time="2025-08-13T00:03:34.345373167Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:34.345827 env[1583]: time="2025-08-13T00:03:34.345794567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 13 00:03:34.349852 env[1583]: time="2025-08-13T00:03:34.349709128Z" level=info msg="CreateContainer within sandbox \"7e1009dbe17cd6a165e32672074587685f382525aed7f29785e6da4af1e14b25\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:03:34.375218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4251199180.mount: Deactivated successfully. Aug 13 00:03:34.390132 env[1583]: time="2025-08-13T00:03:34.390085056Z" level=info msg="CreateContainer within sandbox \"7e1009dbe17cd6a165e32672074587685f382525aed7f29785e6da4af1e14b25\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ab62355563d2bc48baa05109db1293933bf844150478a062f52e3c93dfa669ec\"" Aug 13 00:03:34.392340 env[1583]: time="2025-08-13T00:03:34.392270737Z" level=info msg="StartContainer for \"ab62355563d2bc48baa05109db1293933bf844150478a062f52e3c93dfa669ec\"" Aug 13 00:03:34.452960 env[1583]: time="2025-08-13T00:03:34.452909830Z" level=info msg="StartContainer for \"ab62355563d2bc48baa05109db1293933bf844150478a062f52e3c93dfa669ec\" returns successfully" Aug 13 00:03:34.492865 kubelet[2655]: E0813 00:03:34.492817 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dc7fc" 
podUID="8e788131-5ffd-4005-9137-e23c17af1da5" Aug 13 00:03:35.728142 env[1583]: time="2025-08-13T00:03:35.728068536Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:03:35.748073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab62355563d2bc48baa05109db1293933bf844150478a062f52e3c93dfa669ec-rootfs.mount: Deactivated successfully. Aug 13 00:03:35.760642 kubelet[2655]: I0813 00:03:35.755611 2655 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:03:35.827276 kubelet[2655]: I0813 00:03:35.826883 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ec1ef1a0-b4ac-42a1-9532-047e283102fa-goldmane-key-pair\") pod \"goldmane-58fd7646b9-kcl6d\" (UID: \"ec1ef1a0-b4ac-42a1-9532-047e283102fa\") " pod="calico-system/goldmane-58fd7646b9-kcl6d" Aug 13 00:03:35.827276 kubelet[2655]: I0813 00:03:35.826959 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1ef1a0-b4ac-42a1-9532-047e283102fa-config\") pod \"goldmane-58fd7646b9-kcl6d\" (UID: \"ec1ef1a0-b4ac-42a1-9532-047e283102fa\") " pod="calico-system/goldmane-58fd7646b9-kcl6d" Aug 13 00:03:35.827276 kubelet[2655]: I0813 00:03:35.827004 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec1ef1a0-b4ac-42a1-9532-047e283102fa-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-kcl6d\" (UID: \"ec1ef1a0-b4ac-42a1-9532-047e283102fa\") " pod="calico-system/goldmane-58fd7646b9-kcl6d" Aug 13 00:03:35.827276 kubelet[2655]: I0813 00:03:35.827029 2655 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pncmw\" (UniqueName: \"kubernetes.io/projected/ec1ef1a0-b4ac-42a1-9532-047e283102fa-kube-api-access-pncmw\") pod \"goldmane-58fd7646b9-kcl6d\" (UID: \"ec1ef1a0-b4ac-42a1-9532-047e283102fa\") " pod="calico-system/goldmane-58fd7646b9-kcl6d" Aug 13 00:03:35.827276 kubelet[2655]: I0813 00:03:35.827049 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/527d02b8-3c8b-4d2b-ac23-d425550b3599-config-volume\") pod \"coredns-7c65d6cfc9-gfdrg\" (UID: \"527d02b8-3c8b-4d2b-ac23-d425550b3599\") " pod="kube-system/coredns-7c65d6cfc9-gfdrg" Aug 13 00:03:35.827627 kubelet[2655]: I0813 00:03:35.827092 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czlqf\" (UniqueName: \"kubernetes.io/projected/527d02b8-3c8b-4d2b-ac23-d425550b3599-kube-api-access-czlqf\") pod \"coredns-7c65d6cfc9-gfdrg\" (UID: \"527d02b8-3c8b-4d2b-ac23-d425550b3599\") " pod="kube-system/coredns-7c65d6cfc9-gfdrg" Aug 13 00:03:35.928195 kubelet[2655]: I0813 00:03:35.928149 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/42ea4925-5542-4fd9-b692-9049837d3ae0-whisker-backend-key-pair\") pod \"whisker-88f556f98-v2t2q\" (UID: \"42ea4925-5542-4fd9-b692-9049837d3ae0\") " pod="calico-system/whisker-88f556f98-v2t2q" Aug 13 00:03:35.928195 kubelet[2655]: I0813 00:03:35.928199 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28ea706e-5d40-433e-9ee6-62a5f96b1be1-calico-apiserver-certs\") pod \"calico-apiserver-5664c8f75b-pc5h2\" (UID: \"28ea706e-5d40-433e-9ee6-62a5f96b1be1\") " 
pod="calico-apiserver/calico-apiserver-5664c8f75b-pc5h2" Aug 13 00:03:35.928399 kubelet[2655]: I0813 00:03:35.928217 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z8t5\" (UniqueName: \"kubernetes.io/projected/28ea706e-5d40-433e-9ee6-62a5f96b1be1-kube-api-access-5z8t5\") pod \"calico-apiserver-5664c8f75b-pc5h2\" (UID: \"28ea706e-5d40-433e-9ee6-62a5f96b1be1\") " pod="calico-apiserver/calico-apiserver-5664c8f75b-pc5h2" Aug 13 00:03:35.928399 kubelet[2655]: I0813 00:03:35.928251 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42ea4925-5542-4fd9-b692-9049837d3ae0-whisker-ca-bundle\") pod \"whisker-88f556f98-v2t2q\" (UID: \"42ea4925-5542-4fd9-b692-9049837d3ae0\") " pod="calico-system/whisker-88f556f98-v2t2q" Aug 13 00:03:35.928399 kubelet[2655]: I0813 00:03:35.928282 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77ef7b45-3d42-4c7d-b3d2-5b91108fefb3-calico-apiserver-certs\") pod \"calico-apiserver-5f97f8f466-rqg7k\" (UID: \"77ef7b45-3d42-4c7d-b3d2-5b91108fefb3\") " pod="calico-apiserver/calico-apiserver-5f97f8f466-rqg7k" Aug 13 00:03:35.928399 kubelet[2655]: I0813 00:03:35.928312 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9-tigera-ca-bundle\") pod \"calico-kube-controllers-576869b9dc-vzbtv\" (UID: \"1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9\") " pod="calico-system/calico-kube-controllers-576869b9dc-vzbtv" Aug 13 00:03:35.928399 kubelet[2655]: I0813 00:03:35.928330 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd9sc\" (UniqueName: 
\"kubernetes.io/projected/3816201b-b93a-4ec2-a67a-d16b5eed4f52-kube-api-access-zd9sc\") pod \"coredns-7c65d6cfc9-ktbwq\" (UID: \"3816201b-b93a-4ec2-a67a-d16b5eed4f52\") " pod="kube-system/coredns-7c65d6cfc9-ktbwq" Aug 13 00:03:35.928521 kubelet[2655]: I0813 00:03:35.928369 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3816201b-b93a-4ec2-a67a-d16b5eed4f52-config-volume\") pod \"coredns-7c65d6cfc9-ktbwq\" (UID: \"3816201b-b93a-4ec2-a67a-d16b5eed4f52\") " pod="kube-system/coredns-7c65d6cfc9-ktbwq" Aug 13 00:03:35.928521 kubelet[2655]: I0813 00:03:35.928389 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/87091bc5-d911-4547-82d0-decf534f50dd-calico-apiserver-certs\") pod \"calico-apiserver-5f97f8f466-r5z5g\" (UID: \"87091bc5-d911-4547-82d0-decf534f50dd\") " pod="calico-apiserver/calico-apiserver-5f97f8f466-r5z5g" Aug 13 00:03:35.928521 kubelet[2655]: I0813 00:03:35.928407 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkvnd\" (UniqueName: \"kubernetes.io/projected/77ef7b45-3d42-4c7d-b3d2-5b91108fefb3-kube-api-access-dkvnd\") pod \"calico-apiserver-5f97f8f466-rqg7k\" (UID: \"77ef7b45-3d42-4c7d-b3d2-5b91108fefb3\") " pod="calico-apiserver/calico-apiserver-5f97f8f466-rqg7k" Aug 13 00:03:35.928521 kubelet[2655]: I0813 00:03:35.928436 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8v8g\" (UniqueName: \"kubernetes.io/projected/1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9-kube-api-access-p8v8g\") pod \"calico-kube-controllers-576869b9dc-vzbtv\" (UID: \"1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9\") " pod="calico-system/calico-kube-controllers-576869b9dc-vzbtv" Aug 13 00:03:35.928521 kubelet[2655]: I0813 00:03:35.928457 2655 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pddd9\" (UniqueName: \"kubernetes.io/projected/87091bc5-d911-4547-82d0-decf534f50dd-kube-api-access-pddd9\") pod \"calico-apiserver-5f97f8f466-r5z5g\" (UID: \"87091bc5-d911-4547-82d0-decf534f50dd\") " pod="calico-apiserver/calico-apiserver-5f97f8f466-r5z5g" Aug 13 00:03:35.928639 kubelet[2655]: I0813 00:03:35.928473 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65g6x\" (UniqueName: \"kubernetes.io/projected/42ea4925-5542-4fd9-b692-9049837d3ae0-kube-api-access-65g6x\") pod \"whisker-88f556f98-v2t2q\" (UID: \"42ea4925-5542-4fd9-b692-9049837d3ae0\") " pod="calico-system/whisker-88f556f98-v2t2q" Aug 13 00:03:36.583569 env[1583]: time="2025-08-13T00:03:36.583169310Z" level=info msg="shim disconnected" id=ab62355563d2bc48baa05109db1293933bf844150478a062f52e3c93dfa669ec Aug 13 00:03:36.583569 env[1583]: time="2025-08-13T00:03:36.583219030Z" level=warning msg="cleaning up after shim disconnected" id=ab62355563d2bc48baa05109db1293933bf844150478a062f52e3c93dfa669ec namespace=k8s.io Aug 13 00:03:36.583569 env[1583]: time="2025-08-13T00:03:36.583233950Z" level=info msg="cleaning up dead shim" Aug 13 00:03:36.583569 env[1583]: time="2025-08-13T00:03:36.583386950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dc7fc,Uid:8e788131-5ffd-4005-9137-e23c17af1da5,Namespace:calico-system,Attempt:0,}" Aug 13 00:03:36.584033 env[1583]: time="2025-08-13T00:03:36.583753590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gfdrg,Uid:527d02b8-3c8b-4d2b-ac23-d425550b3599,Namespace:kube-system,Attempt:0,}" Aug 13 00:03:36.584231 env[1583]: time="2025-08-13T00:03:36.584101590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kcl6d,Uid:ec1ef1a0-b4ac-42a1-9532-047e283102fa,Namespace:calico-system,Attempt:0,}" Aug 13 
00:03:36.594212 env[1583]: time="2025-08-13T00:03:36.594171552Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:03:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3387 runtime=io.containerd.runc.v2\n" Aug 13 00:03:36.644770 env[1583]: time="2025-08-13T00:03:36.644732803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:03:36.725342 env[1583]: time="2025-08-13T00:03:36.725298339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576869b9dc-vzbtv,Uid:1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9,Namespace:calico-system,Attempt:0,}" Aug 13 00:03:36.732397 env[1583]: time="2025-08-13T00:03:36.732331460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ktbwq,Uid:3816201b-b93a-4ec2-a67a-d16b5eed4f52,Namespace:kube-system,Attempt:0,}" Aug 13 00:03:36.742551 env[1583]: time="2025-08-13T00:03:36.742492983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664c8f75b-pc5h2,Uid:28ea706e-5d40-433e-9ee6-62a5f96b1be1,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:03:36.745457 env[1583]: time="2025-08-13T00:03:36.745415783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8f466-rqg7k,Uid:77ef7b45-3d42-4c7d-b3d2-5b91108fefb3,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:03:36.745976 env[1583]: time="2025-08-13T00:03:36.745950223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8f466-r5z5g,Uid:87091bc5-d911-4547-82d0-decf534f50dd,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:03:36.774442 env[1583]: time="2025-08-13T00:03:36.774399469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-88f556f98-v2t2q,Uid:42ea4925-5542-4fd9-b692-9049837d3ae0,Namespace:calico-system,Attempt:0,}" Aug 13 00:03:36.786558 env[1583]: time="2025-08-13T00:03:36.786478031Z" level=error msg="Failed to destroy network for sandbox 
\"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:36.789067 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0-shm.mount: Deactivated successfully. Aug 13 00:03:36.790780 env[1583]: time="2025-08-13T00:03:36.790729592Z" level=error msg="Failed to destroy network for sandbox \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:36.793185 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d-shm.mount: Deactivated successfully. 
Aug 13 00:03:36.793576 env[1583]: time="2025-08-13T00:03:36.793528953Z" level=error msg="encountered an error cleaning up failed sandbox \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:36.793988 env[1583]: time="2025-08-13T00:03:36.793952233Z" level=error msg="encountered an error cleaning up failed sandbox \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:36.794158 env[1583]: time="2025-08-13T00:03:36.794116873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kcl6d,Uid:ec1ef1a0-b4ac-42a1-9532-047e283102fa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:36.794361 env[1583]: time="2025-08-13T00:03:36.794087673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dc7fc,Uid:8e788131-5ffd-4005-9137-e23c17af1da5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:36.794714 kubelet[2655]: E0813 
00:03:36.794604 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:36.795231 kubelet[2655]: E0813 00:03:36.794690 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-kcl6d" Aug 13 00:03:36.795231 kubelet[2655]: E0813 00:03:36.795051 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-kcl6d" Aug 13 00:03:36.795231 kubelet[2655]: E0813 00:03:36.795106 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-kcl6d_calico-system(ec1ef1a0-b4ac-42a1-9532-047e283102fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-kcl6d_calico-system(ec1ef1a0-b4ac-42a1-9532-047e283102fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-kcl6d" podUID="ec1ef1a0-b4ac-42a1-9532-047e283102fa" Aug 13 00:03:36.796957 kubelet[2655]: E0813 00:03:36.796805 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:36.796957 kubelet[2655]: E0813 00:03:36.796848 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dc7fc" Aug 13 00:03:36.796957 kubelet[2655]: E0813 00:03:36.796866 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dc7fc" Aug 13 00:03:36.797146 kubelet[2655]: E0813 00:03:36.796900 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dc7fc_calico-system(8e788131-5ffd-4005-9137-e23c17af1da5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dc7fc_calico-system(8e788131-5ffd-4005-9137-e23c17af1da5)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dc7fc" podUID="8e788131-5ffd-4005-9137-e23c17af1da5" Aug 13 00:03:36.810286 env[1583]: time="2025-08-13T00:03:36.810233916Z" level=error msg="Failed to destroy network for sandbox \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:36.811306 env[1583]: time="2025-08-13T00:03:36.811265756Z" level=error msg="encountered an error cleaning up failed sandbox \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:36.816993 env[1583]: time="2025-08-13T00:03:36.816941358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gfdrg,Uid:527d02b8-3c8b-4d2b-ac23-d425550b3599,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:36.817620 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502-shm.mount: Deactivated successfully. 
Aug 13 00:03:36.820860 kubelet[2655]: E0813 00:03:36.818928 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:36.820860 kubelet[2655]: E0813 00:03:36.818999 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gfdrg" Aug 13 00:03:36.820860 kubelet[2655]: E0813 00:03:36.819020 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gfdrg" Aug 13 00:03:36.820999 kubelet[2655]: E0813 00:03:36.819072 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gfdrg_kube-system(527d02b8-3c8b-4d2b-ac23-d425550b3599)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gfdrg_kube-system(527d02b8-3c8b-4d2b-ac23-d425550b3599)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gfdrg" podUID="527d02b8-3c8b-4d2b-ac23-d425550b3599" Aug 13 00:03:36.954198 env[1583]: time="2025-08-13T00:03:36.954126545Z" level=error msg="Failed to destroy network for sandbox \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:36.954720 env[1583]: time="2025-08-13T00:03:36.954631226Z" level=error msg="encountered an error cleaning up failed sandbox \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:36.954883 env[1583]: time="2025-08-13T00:03:36.954852066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576869b9dc-vzbtv,Uid:1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:36.956433 kubelet[2655]: E0813 00:03:36.955207 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Aug 13 00:03:36.956433 kubelet[2655]: E0813 00:03:36.955277 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-576869b9dc-vzbtv" Aug 13 00:03:36.956433 kubelet[2655]: E0813 00:03:36.955299 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-576869b9dc-vzbtv" Aug 13 00:03:36.957282 kubelet[2655]: E0813 00:03:36.955358 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-576869b9dc-vzbtv_calico-system(1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-576869b9dc-vzbtv_calico-system(1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-576869b9dc-vzbtv" podUID="1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9" Aug 13 00:03:37.006474 env[1583]: time="2025-08-13T00:03:37.006420276Z" level=error 
msg="Failed to destroy network for sandbox \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.007116 env[1583]: time="2025-08-13T00:03:37.007079196Z" level=error msg="encountered an error cleaning up failed sandbox \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.007282 env[1583]: time="2025-08-13T00:03:37.007252396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ktbwq,Uid:3816201b-b93a-4ec2-a67a-d16b5eed4f52,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.009087 kubelet[2655]: E0813 00:03:37.007945 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.009087 kubelet[2655]: E0813 00:03:37.008022 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ktbwq" Aug 13 00:03:37.009087 kubelet[2655]: E0813 00:03:37.008048 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ktbwq" Aug 13 00:03:37.009962 kubelet[2655]: E0813 00:03:37.008107 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-ktbwq_kube-system(3816201b-b93a-4ec2-a67a-d16b5eed4f52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-ktbwq_kube-system(3816201b-b93a-4ec2-a67a-d16b5eed4f52)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ktbwq" podUID="3816201b-b93a-4ec2-a67a-d16b5eed4f52" Aug 13 00:03:37.026966 env[1583]: time="2025-08-13T00:03:37.026897600Z" level=error msg="Failed to destroy network for sandbox \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.027309 env[1583]: time="2025-08-13T00:03:37.027260920Z" level=error msg="encountered an error 
cleaning up failed sandbox \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.027362 env[1583]: time="2025-08-13T00:03:37.027311960Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664c8f75b-pc5h2,Uid:28ea706e-5d40-433e-9ee6-62a5f96b1be1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.028086 kubelet[2655]: E0813 00:03:37.027681 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.028086 kubelet[2655]: E0813 00:03:37.027745 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664c8f75b-pc5h2" Aug 13 00:03:37.028086 kubelet[2655]: E0813 00:03:37.027777 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664c8f75b-pc5h2" Aug 13 00:03:37.028273 kubelet[2655]: E0813 00:03:37.028048 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5664c8f75b-pc5h2_calico-apiserver(28ea706e-5d40-433e-9ee6-62a5f96b1be1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5664c8f75b-pc5h2_calico-apiserver(28ea706e-5d40-433e-9ee6-62a5f96b1be1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5664c8f75b-pc5h2" podUID="28ea706e-5d40-433e-9ee6-62a5f96b1be1" Aug 13 00:03:37.047545 env[1583]: time="2025-08-13T00:03:37.047456164Z" level=error msg="Failed to destroy network for sandbox \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.048123 env[1583]: time="2025-08-13T00:03:37.048088324Z" level=error msg="encountered an error cleaning up failed sandbox \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 
00:03:37.048256 env[1583]: time="2025-08-13T00:03:37.048228964Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-88f556f98-v2t2q,Uid:42ea4925-5542-4fd9-b692-9049837d3ae0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.049021 kubelet[2655]: E0813 00:03:37.048620 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.049021 kubelet[2655]: E0813 00:03:37.048701 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-88f556f98-v2t2q" Aug 13 00:03:37.049021 kubelet[2655]: E0813 00:03:37.048719 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-88f556f98-v2t2q" Aug 13 00:03:37.049213 kubelet[2655]: 
E0813 00:03:37.048782 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-88f556f98-v2t2q_calico-system(42ea4925-5542-4fd9-b692-9049837d3ae0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-88f556f98-v2t2q_calico-system(42ea4925-5542-4fd9-b692-9049837d3ae0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-88f556f98-v2t2q" podUID="42ea4925-5542-4fd9-b692-9049837d3ae0" Aug 13 00:03:37.051698 env[1583]: time="2025-08-13T00:03:37.051641445Z" level=error msg="Failed to destroy network for sandbox \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.053065 env[1583]: time="2025-08-13T00:03:37.053027845Z" level=error msg="encountered an error cleaning up failed sandbox \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.053209 env[1583]: time="2025-08-13T00:03:37.053182005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8f466-r5z5g,Uid:87091bc5-d911-4547-82d0-decf534f50dd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.056212 kubelet[2655]: E0813 00:03:37.054686 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.056212 kubelet[2655]: E0813 00:03:37.054735 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f97f8f466-r5z5g" Aug 13 00:03:37.056212 kubelet[2655]: E0813 00:03:37.054770 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f97f8f466-r5z5g" Aug 13 00:03:37.056381 kubelet[2655]: E0813 00:03:37.054814 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f97f8f466-r5z5g_calico-apiserver(87091bc5-d911-4547-82d0-decf534f50dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f97f8f466-r5z5g_calico-apiserver(87091bc5-d911-4547-82d0-decf534f50dd)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f97f8f466-r5z5g" podUID="87091bc5-d911-4547-82d0-decf534f50dd" Aug 13 00:03:37.062595 env[1583]: time="2025-08-13T00:03:37.062546767Z" level=error msg="Failed to destroy network for sandbox \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.063067 env[1583]: time="2025-08-13T00:03:37.063030407Z" level=error msg="encountered an error cleaning up failed sandbox \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.063227 env[1583]: time="2025-08-13T00:03:37.063196207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8f466-rqg7k,Uid:77ef7b45-3d42-4c7d-b3d2-5b91108fefb3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.064072 kubelet[2655]: E0813 00:03:37.063526 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.064072 kubelet[2655]: E0813 00:03:37.063575 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f97f8f466-rqg7k" Aug 13 00:03:37.064072 kubelet[2655]: E0813 00:03:37.063611 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f97f8f466-rqg7k" Aug 13 00:03:37.064230 kubelet[2655]: E0813 00:03:37.064002 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f97f8f466-rqg7k_calico-apiserver(77ef7b45-3d42-4c7d-b3d2-5b91108fefb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f97f8f466-rqg7k_calico-apiserver(77ef7b45-3d42-4c7d-b3d2-5b91108fefb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5f97f8f466-rqg7k" podUID="77ef7b45-3d42-4c7d-b3d2-5b91108fefb3" Aug 13 00:03:37.651702 kubelet[2655]: I0813 00:03:37.651624 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Aug 13 00:03:37.653272 env[1583]: time="2025-08-13T00:03:37.653197285Z" level=info msg="StopPodSandbox for \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\"" Aug 13 00:03:37.670919 kubelet[2655]: I0813 00:03:37.670852 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Aug 13 00:03:37.671914 env[1583]: time="2025-08-13T00:03:37.671855089Z" level=info msg="StopPodSandbox for \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\"" Aug 13 00:03:37.676692 kubelet[2655]: I0813 00:03:37.676316 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Aug 13 00:03:37.678594 env[1583]: time="2025-08-13T00:03:37.678553410Z" level=info msg="StopPodSandbox for \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\"" Aug 13 00:03:37.681530 kubelet[2655]: I0813 00:03:37.681467 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Aug 13 00:03:37.682242 env[1583]: time="2025-08-13T00:03:37.682212411Z" level=info msg="StopPodSandbox for \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\"" Aug 13 00:03:37.683094 kubelet[2655]: I0813 00:03:37.682781 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Aug 13 00:03:37.685222 env[1583]: time="2025-08-13T00:03:37.685194891Z" level=info msg="StopPodSandbox for 
\"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\"" Aug 13 00:03:37.692481 kubelet[2655]: I0813 00:03:37.691881 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Aug 13 00:03:37.692798 env[1583]: time="2025-08-13T00:03:37.692771973Z" level=info msg="StopPodSandbox for \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\"" Aug 13 00:03:37.694577 kubelet[2655]: I0813 00:03:37.694162 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Aug 13 00:03:37.694833 env[1583]: time="2025-08-13T00:03:37.694808653Z" level=info msg="StopPodSandbox for \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\"" Aug 13 00:03:37.696988 kubelet[2655]: I0813 00:03:37.696924 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Aug 13 00:03:37.697707 env[1583]: time="2025-08-13T00:03:37.697653254Z" level=info msg="StopPodSandbox for \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\"" Aug 13 00:03:37.699362 kubelet[2655]: I0813 00:03:37.698930 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Aug 13 00:03:37.699492 env[1583]: time="2025-08-13T00:03:37.699459414Z" level=info msg="StopPodSandbox for \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\"" Aug 13 00:03:37.726715 env[1583]: time="2025-08-13T00:03:37.726623220Z" level=error msg="StopPodSandbox for \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\" failed" error="failed to destroy network for sandbox \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.727490 kubelet[2655]: E0813 00:03:37.727198 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Aug 13 00:03:37.727490 kubelet[2655]: E0813 00:03:37.727309 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70"} Aug 13 00:03:37.727490 kubelet[2655]: E0813 00:03:37.727396 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"87091bc5-d911-4547-82d0-decf534f50dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:03:37.727490 kubelet[2655]: E0813 00:03:37.727418 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"87091bc5-d911-4547-82d0-decf534f50dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f97f8f466-r5z5g" podUID="87091bc5-d911-4547-82d0-decf534f50dd" Aug 13 00:03:37.749640 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021-shm.mount: Deactivated successfully. Aug 13 00:03:37.777425 env[1583]: time="2025-08-13T00:03:37.777280110Z" level=error msg="StopPodSandbox for \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\" failed" error="failed to destroy network for sandbox \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.783207 kubelet[2655]: E0813 00:03:37.782996 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Aug 13 00:03:37.783207 kubelet[2655]: E0813 00:03:37.783054 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0"} Aug 13 00:03:37.783207 kubelet[2655]: E0813 00:03:37.783088 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e788131-5ffd-4005-9137-e23c17af1da5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:03:37.783207 kubelet[2655]: E0813 00:03:37.783126 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e788131-5ffd-4005-9137-e23c17af1da5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dc7fc" podUID="8e788131-5ffd-4005-9137-e23c17af1da5" Aug 13 00:03:37.811187 env[1583]: time="2025-08-13T00:03:37.811131356Z" level=error msg="StopPodSandbox for \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\" failed" error="failed to destroy network for sandbox \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.811616 kubelet[2655]: E0813 00:03:37.811563 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Aug 13 00:03:37.811940 kubelet[2655]: E0813 00:03:37.811627 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062"} Aug 13 
00:03:37.811940 kubelet[2655]: E0813 00:03:37.811685 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28ea706e-5d40-433e-9ee6-62a5f96b1be1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:03:37.811940 kubelet[2655]: E0813 00:03:37.811723 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28ea706e-5d40-433e-9ee6-62a5f96b1be1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5664c8f75b-pc5h2" podUID="28ea706e-5d40-433e-9ee6-62a5f96b1be1" Aug 13 00:03:37.882979 env[1583]: time="2025-08-13T00:03:37.882820691Z" level=error msg="StopPodSandbox for \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\" failed" error="failed to destroy network for sandbox \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.883164 kubelet[2655]: E0813 00:03:37.883101 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Aug 13 00:03:37.883164 kubelet[2655]: E0813 00:03:37.883150 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502"} Aug 13 00:03:37.883242 kubelet[2655]: E0813 00:03:37.883185 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"527d02b8-3c8b-4d2b-ac23-d425550b3599\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:03:37.883242 kubelet[2655]: E0813 00:03:37.883206 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"527d02b8-3c8b-4d2b-ac23-d425550b3599\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gfdrg" podUID="527d02b8-3c8b-4d2b-ac23-d425550b3599" Aug 13 00:03:37.886320 env[1583]: time="2025-08-13T00:03:37.886268331Z" level=error msg="StopPodSandbox for \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\" failed" error="failed to destroy network for sandbox \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.886721 kubelet[2655]: E0813 00:03:37.886648 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Aug 13 00:03:37.886808 kubelet[2655]: E0813 00:03:37.886727 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00"} Aug 13 00:03:37.886808 kubelet[2655]: E0813 00:03:37.886761 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3816201b-b93a-4ec2-a67a-d16b5eed4f52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:03:37.886808 kubelet[2655]: E0813 00:03:37.886783 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3816201b-b93a-4ec2-a67a-d16b5eed4f52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ktbwq" podUID="3816201b-b93a-4ec2-a67a-d16b5eed4f52" Aug 13 00:03:37.888936 env[1583]: time="2025-08-13T00:03:37.888889052Z" level=error msg="StopPodSandbox for \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\" failed" error="failed to destroy network for sandbox \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.889338 kubelet[2655]: E0813 00:03:37.889208 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Aug 13 00:03:37.889338 kubelet[2655]: E0813 00:03:37.889247 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2"} Aug 13 00:03:37.889338 kubelet[2655]: E0813 00:03:37.889289 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42ea4925-5542-4fd9-b692-9049837d3ae0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:03:37.889338 kubelet[2655]: E0813 00:03:37.889309 2655 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42ea4925-5542-4fd9-b692-9049837d3ae0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-88f556f98-v2t2q" podUID="42ea4925-5542-4fd9-b692-9049837d3ae0" Aug 13 00:03:37.893578 env[1583]: time="2025-08-13T00:03:37.893533733Z" level=error msg="StopPodSandbox for \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\" failed" error="failed to destroy network for sandbox \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.893978 kubelet[2655]: E0813 00:03:37.893905 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Aug 13 00:03:37.893978 kubelet[2655]: E0813 00:03:37.893953 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d"} Aug 13 00:03:37.894222 kubelet[2655]: E0813 00:03:37.893983 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec1ef1a0-b4ac-42a1-9532-047e283102fa\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:03:37.894222 kubelet[2655]: E0813 00:03:37.894003 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec1ef1a0-b4ac-42a1-9532-047e283102fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-kcl6d" podUID="ec1ef1a0-b4ac-42a1-9532-047e283102fa" Aug 13 00:03:37.895805 env[1583]: time="2025-08-13T00:03:37.895768493Z" level=error msg="StopPodSandbox for \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\" failed" error="failed to destroy network for sandbox \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:03:37.896168 kubelet[2655]: E0813 00:03:37.896048 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Aug 13 00:03:37.896168 kubelet[2655]: E0813 00:03:37.896083 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021"} Aug 13 00:03:37.896168 kubelet[2655]: E0813 00:03:37.896108 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:03:37.896168 kubelet[2655]: E0813 00:03:37.896142 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-576869b9dc-vzbtv" podUID="1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9" Aug 13 00:03:37.896538 env[1583]: time="2025-08-13T00:03:37.896486654Z" level=error msg="StopPodSandbox for \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\" failed" error="failed to destroy network for sandbox \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Aug 13 00:03:37.896742 kubelet[2655]: E0813 00:03:37.896705 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Aug 13 00:03:37.896812 kubelet[2655]: E0813 00:03:37.896749 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797"} Aug 13 00:03:37.896812 kubelet[2655]: E0813 00:03:37.896778 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77ef7b45-3d42-4c7d-b3d2-5b91108fefb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:03:37.896812 kubelet[2655]: E0813 00:03:37.896797 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77ef7b45-3d42-4c7d-b3d2-5b91108fefb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f97f8f466-rqg7k" podUID="77ef7b45-3d42-4c7d-b3d2-5b91108fefb3" Aug 13 
00:03:41.859605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount487163404.mount: Deactivated successfully. Aug 13 00:03:42.369284 env[1583]: time="2025-08-13T00:03:42.369237384Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:42.375370 env[1583]: time="2025-08-13T00:03:42.375332945Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:42.378186 env[1583]: time="2025-08-13T00:03:42.378156906Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:42.381017 env[1583]: time="2025-08-13T00:03:42.380989386Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:42.381341 env[1583]: time="2025-08-13T00:03:42.381304946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 13 00:03:42.401224 env[1583]: time="2025-08-13T00:03:42.401182430Z" level=info msg="CreateContainer within sandbox \"7e1009dbe17cd6a165e32672074587685f382525aed7f29785e6da4af1e14b25\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:03:42.428457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3564114631.mount: Deactivated successfully. 
Aug 13 00:03:42.442654 env[1583]: time="2025-08-13T00:03:42.442569198Z" level=info msg="CreateContainer within sandbox \"7e1009dbe17cd6a165e32672074587685f382525aed7f29785e6da4af1e14b25\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c737d97130b318ae81de8ac79f780f355b3ab9c9c6150d75fecb1770449ecc14\"" Aug 13 00:03:42.443439 env[1583]: time="2025-08-13T00:03:42.443391358Z" level=info msg="StartContainer for \"c737d97130b318ae81de8ac79f780f355b3ab9c9c6150d75fecb1770449ecc14\"" Aug 13 00:03:42.501287 env[1583]: time="2025-08-13T00:03:42.501236808Z" level=info msg="StartContainer for \"c737d97130b318ae81de8ac79f780f355b3ab9c9c6150d75fecb1770449ecc14\" returns successfully" Aug 13 00:03:42.741979 kubelet[2655]: I0813 00:03:42.741920 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kdqm4" podStartSLOduration=1.569129093 podStartE2EDuration="16.741903732s" podCreationTimestamp="2025-08-13 00:03:26 +0000 UTC" firstStartedPulling="2025-08-13 00:03:27.209788348 +0000 UTC m=+24.888687054" lastFinishedPulling="2025-08-13 00:03:42.382562947 +0000 UTC m=+40.061461693" observedRunningTime="2025-08-13 00:03:42.741378452 +0000 UTC m=+40.420277158" watchObservedRunningTime="2025-08-13 00:03:42.741903732 +0000 UTC m=+40.420802478" Aug 13 00:03:43.444474 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:03:43.444606 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 13 00:03:43.582323 env[1583]: time="2025-08-13T00:03:43.582271004Z" level=info msg="StopPodSandbox for \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\""
Aug 13 00:03:43.710065 env[1583]: 2025-08-13 00:03:43.658 [INFO][3873] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2"
Aug 13 00:03:43.710065 env[1583]: 2025-08-13 00:03:43.658 [INFO][3873] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" iface="eth0" netns="/var/run/netns/cni-4b5605c5-3c87-7366-f1e5-1ed9affb26a0"
Aug 13 00:03:43.710065 env[1583]: 2025-08-13 00:03:43.658 [INFO][3873] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" iface="eth0" netns="/var/run/netns/cni-4b5605c5-3c87-7366-f1e5-1ed9affb26a0"
Aug 13 00:03:43.710065 env[1583]: 2025-08-13 00:03:43.659 [INFO][3873] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" iface="eth0" netns="/var/run/netns/cni-4b5605c5-3c87-7366-f1e5-1ed9affb26a0"
Aug 13 00:03:43.710065 env[1583]: 2025-08-13 00:03:43.659 [INFO][3873] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2"
Aug 13 00:03:43.710065 env[1583]: 2025-08-13 00:03:43.659 [INFO][3873] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2"
Aug 13 00:03:43.710065 env[1583]: 2025-08-13 00:03:43.694 [INFO][3881] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" HandleID="k8s-pod-network.fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Workload="ci--3510.3.8--a--dd293077f6-k8s-whisker--88f556f98--v2t2q-eth0"
Aug 13 00:03:43.710065 env[1583]: 2025-08-13 00:03:43.694 [INFO][3881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:03:43.710065 env[1583]: 2025-08-13 00:03:43.694 [INFO][3881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:03:43.710065 env[1583]: 2025-08-13 00:03:43.703 [WARNING][3881] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" HandleID="k8s-pod-network.fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Workload="ci--3510.3.8--a--dd293077f6-k8s-whisker--88f556f98--v2t2q-eth0"
Aug 13 00:03:43.710065 env[1583]: 2025-08-13 00:03:43.703 [INFO][3881] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" HandleID="k8s-pod-network.fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Workload="ci--3510.3.8--a--dd293077f6-k8s-whisker--88f556f98--v2t2q-eth0"
Aug 13 00:03:43.710065 env[1583]: 2025-08-13 00:03:43.705 [INFO][3881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:03:43.710065 env[1583]: 2025-08-13 00:03:43.708 [INFO][3873] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2"
Aug 13 00:03:43.713165 systemd[1]: run-netns-cni\x2d4b5605c5\x2d3c87\x2d7366\x2df1e5\x2d1ed9affb26a0.mount: Deactivated successfully.
Aug 13 00:03:43.714328 env[1583]: time="2025-08-13T00:03:43.714288467Z" level=info msg="TearDown network for sandbox \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\" successfully"
Aug 13 00:03:43.714427 env[1583]: time="2025-08-13T00:03:43.714408707Z" level=info msg="StopPodSandbox for \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\" returns successfully"
Aug 13 00:03:43.757349 systemd[1]: run-containerd-runc-k8s.io-c737d97130b318ae81de8ac79f780f355b3ab9c9c6150d75fecb1770449ecc14-runc.By29qg.mount: Deactivated successfully.
Aug 13 00:03:43.780926 kubelet[2655]: I0813 00:03:43.780204 2655 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42ea4925-5542-4fd9-b692-9049837d3ae0-whisker-ca-bundle\") pod \"42ea4925-5542-4fd9-b692-9049837d3ae0\" (UID: \"42ea4925-5542-4fd9-b692-9049837d3ae0\") "
Aug 13 00:03:43.780926 kubelet[2655]: I0813 00:03:43.780249 2655 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/42ea4925-5542-4fd9-b692-9049837d3ae0-whisker-backend-key-pair\") pod \"42ea4925-5542-4fd9-b692-9049837d3ae0\" (UID: \"42ea4925-5542-4fd9-b692-9049837d3ae0\") "
Aug 13 00:03:43.780926 kubelet[2655]: I0813 00:03:43.780284 2655 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65g6x\" (UniqueName: \"kubernetes.io/projected/42ea4925-5542-4fd9-b692-9049837d3ae0-kube-api-access-65g6x\") pod \"42ea4925-5542-4fd9-b692-9049837d3ae0\" (UID: \"42ea4925-5542-4fd9-b692-9049837d3ae0\") "
Aug 13 00:03:43.780926 kubelet[2655]: I0813 00:03:43.780680 2655 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42ea4925-5542-4fd9-b692-9049837d3ae0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "42ea4925-5542-4fd9-b692-9049837d3ae0" (UID: "42ea4925-5542-4fd9-b692-9049837d3ae0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:03:43.787079 kubelet[2655]: I0813 00:03:43.787047 2655 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ea4925-5542-4fd9-b692-9049837d3ae0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "42ea4925-5542-4fd9-b692-9049837d3ae0" (UID: "42ea4925-5542-4fd9-b692-9049837d3ae0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 00:03:43.787520 kubelet[2655]: I0813 00:03:43.787429 2655 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ea4925-5542-4fd9-b692-9049837d3ae0-kube-api-access-65g6x" (OuterVolumeSpecName: "kube-api-access-65g6x") pod "42ea4925-5542-4fd9-b692-9049837d3ae0" (UID: "42ea4925-5542-4fd9-b692-9049837d3ae0"). InnerVolumeSpecName "kube-api-access-65g6x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:03:43.859892 systemd[1]: var-lib-kubelet-pods-42ea4925\x2d5542\x2d4fd9\x2db692\x2d9049837d3ae0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d65g6x.mount: Deactivated successfully.
Aug 13 00:03:43.860046 systemd[1]: var-lib-kubelet-pods-42ea4925\x2d5542\x2d4fd9\x2db692\x2d9049837d3ae0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Aug 13 00:03:43.881266 kubelet[2655]: I0813 00:03:43.881210 2655 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42ea4925-5542-4fd9-b692-9049837d3ae0-whisker-ca-bundle\") on node \"ci-3510.3.8-a-dd293077f6\" DevicePath \"\""
Aug 13 00:03:43.881266 kubelet[2655]: I0813 00:03:43.881257 2655 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/42ea4925-5542-4fd9-b692-9049837d3ae0-whisker-backend-key-pair\") on node \"ci-3510.3.8-a-dd293077f6\" DevicePath \"\""
Aug 13 00:03:43.881266 kubelet[2655]: I0813 00:03:43.881268 2655 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65g6x\" (UniqueName: \"kubernetes.io/projected/42ea4925-5542-4fd9-b692-9049837d3ae0-kube-api-access-65g6x\") on node \"ci-3510.3.8-a-dd293077f6\" DevicePath \"\""
Aug 13 00:03:44.862000 audit[3961]: AVC avc: denied { write } for pid=3961 comm="tee" name="fd" dev="proc" ino=25295 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Aug 13 00:03:44.868935 kernel: kauditd_printk_skb: 20 callbacks suppressed
Aug 13 00:03:44.868987 kernel: audit: type=1400 audit(1755043424.862:311): avc: denied { write } for pid=3961 comm="tee" name="fd" dev="proc" ino=25295 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Aug 13 00:03:44.862000 audit[3961]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcf4fe7d9 a2=241 a3=1b6 items=1 ppid=3928 pid=3961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:44.862000 audit: CWD cwd="/etc/service/enabled/cni/log"
Aug 13 00:03:44.950374 kernel: audit: type=1300 audit(1755043424.862:311): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcf4fe7d9 a2=241 a3=1b6 items=1 ppid=3928 pid=3961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:44.950520 kernel: audit: type=1307 audit(1755043424.862:311): cwd="/etc/service/enabled/cni/log"
Aug 13 00:03:44.862000 audit: PATH item=0 name="/dev/fd/63" inode=24450 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:03:44.969783 kernel: audit: type=1302 audit(1755043424.862:311): item=0 name="/dev/fd/63" inode=24450 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:03:44.862000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Aug 13 00:03:44.987729 kernel: audit: type=1327 audit(1755043424.862:311): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Aug 13 00:03:44.991098 kubelet[2655]: I0813 00:03:44.991061 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhf4s\" (UniqueName: \"kubernetes.io/projected/81d299f9-8f68-4c3c-8b61-fb9ddec44923-kube-api-access-fhf4s\") pod \"whisker-7d66479978-gz5mz\" (UID: \"81d299f9-8f68-4c3c-8b61-fb9ddec44923\") " pod="calico-system/whisker-7d66479978-gz5mz"
Aug 13 00:03:44.991489 kubelet[2655]: I0813 00:03:44.991471 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81d299f9-8f68-4c3c-8b61-fb9ddec44923-whisker-backend-key-pair\") pod \"whisker-7d66479978-gz5mz\" (UID: \"81d299f9-8f68-4c3c-8b61-fb9ddec44923\") " pod="calico-system/whisker-7d66479978-gz5mz"
Aug 13 00:03:44.991599 kubelet[2655]: I0813 00:03:44.991586 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81d299f9-8f68-4c3c-8b61-fb9ddec44923-whisker-ca-bundle\") pod \"whisker-7d66479978-gz5mz\" (UID: \"81d299f9-8f68-4c3c-8b61-fb9ddec44923\") " pod="calico-system/whisker-7d66479978-gz5mz"
Aug 13 00:03:44.993680 kernel: audit: type=1400 audit(1755043424.901:312): avc: denied { write } for pid=3967 comm="tee" name="fd" dev="proc" ino=25322 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Aug 13 00:03:44.901000 audit[3967]: AVC avc: denied { write } for pid=3967 comm="tee" name="fd" dev="proc" ino=25322 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Aug 13 00:03:44.901000 audit[3967]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd6b437d7 a2=241 a3=1b6 items=1 ppid=3937 pid=3967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:45.043487 kernel: audit: type=1300 audit(1755043424.901:312): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd6b437d7 a2=241 a3=1b6 items=1 ppid=3937 pid=3967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:44.901000 audit: CWD cwd="/etc/service/enabled/confd/log"
Aug 13 00:03:45.053561 kernel: audit: type=1307 audit(1755043424.901:312): cwd="/etc/service/enabled/confd/log"
Aug 13 00:03:44.901000 audit: PATH item=0 name="/dev/fd/63" inode=25311 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:03:45.073074 kernel: audit: type=1302 audit(1755043424.901:312): item=0 name="/dev/fd/63" inode=25311 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:03:44.901000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Aug 13 00:03:45.091895 kernel: audit: type=1327 audit(1755043424.901:312): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Aug 13 00:03:44.939000 audit[3990]: AVC avc: denied { write } for pid=3990 comm="tee" name="fd" dev="proc" ino=24469 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Aug 13 00:03:44.939000 audit[3990]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd90237c8 a2=241 a3=1b6 items=1 ppid=3935 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:44.939000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log"
Aug 13 00:03:44.939000 audit: PATH item=0 name="/dev/fd/63" inode=24466 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:03:44.939000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Aug 13 00:03:44.940000 audit[3988]: AVC avc: denied { write } for pid=3988 comm="tee" name="fd" dev="proc" ino=24473 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Aug 13 00:03:44.940000 audit[3988]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd4f6c7d7 a2=241 a3=1b6 items=1 ppid=3946 pid=3988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:44.940000 audit: CWD cwd="/etc/service/enabled/bird6/log"
Aug 13 00:03:44.940000 audit: PATH item=0 name="/dev/fd/63" inode=24465 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:03:44.940000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Aug 13 00:03:44.965000 audit[3993]: AVC avc: denied { write } for pid=3993 comm="tee" name="fd" dev="proc" ino=25347 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Aug 13 00:03:44.965000 audit[3993]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe6a9c7d7 a2=241 a3=1b6 items=1 ppid=3939 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:44.965000 audit: CWD cwd="/etc/service/enabled/felix/log"
Aug 13 00:03:44.965000 audit: PATH item=0 name="/dev/fd/63" inode=25338 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:03:44.965000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Aug 13 00:03:44.991000 audit[3998]: AVC avc: denied { write } for pid=3998 comm="tee" name="fd" dev="proc" ino=24477 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Aug 13 00:03:44.991000 audit[3998]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcd4477d8 a2=241 a3=1b6 items=1 ppid=3932 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:44.991000 audit: CWD cwd="/etc/service/enabled/bird/log"
Aug 13 00:03:44.991000 audit: PATH item=0 name="/dev/fd/63" inode=25344 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:03:44.991000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Aug 13 00:03:44.992000 audit[3996]: AVC avc: denied { write } for pid=3996 comm="tee" name="fd" dev="proc" ino=24481 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Aug 13 00:03:44.992000 audit[3996]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffedec27c7 a2=241 a3=1b6 items=1 ppid=3930 pid=3996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:44.992000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log"
Aug 13 00:03:44.992000 audit: PATH item=0 name="/dev/fd/63" inode=25343 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:03:44.992000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Aug 13 00:03:45.301000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.301000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.301000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.301000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.301000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.301000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.301000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.301000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.301000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.301000 audit: BPF prog-id=10 op=LOAD
Aug 13 00:03:45.301000 audit[4028]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe4c56998 a2=98 a3=ffffe4c56988 items=0 ppid=3940 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:45.301000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030
Aug 13 00:03:45.302000 audit: BPF prog-id=10 op=UNLOAD
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit: BPF prog-id=11 op=LOAD
Aug 13 00:03:45.302000 audit[4028]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe4c56848 a2=74 a3=95 items=0 ppid=3940 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:45.302000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030
Aug 13 00:03:45.302000 audit: BPF prog-id=11 op=UNLOAD
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit: BPF prog-id=12 op=LOAD
Aug 13 00:03:45.302000 audit[4028]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe4c56878 a2=40 a3=ffffe4c568a8 items=0 ppid=3940 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:45.302000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030
Aug 13 00:03:45.302000 audit: BPF prog-id=12 op=UNLOAD
Aug 13 00:03:45.302000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.302000 audit[4028]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffe4c56990 a2=50 a3=0 items=0 ppid=3940 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:45.302000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030
Aug 13 00:03:45.304000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.304000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.304000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.304000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.304000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.304000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.304000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.304000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.304000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.304000 audit: BPF prog-id=13 op=LOAD
Aug 13 00:03:45.304000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffde672278 a2=98 a3=ffffde672268 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:45.304000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Aug 13 00:03:45.305000 audit: BPF prog-id=13 op=UNLOAD
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit: BPF prog-id=14 op=LOAD
Aug 13 00:03:45.305000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffde671f08 a2=74 a3=95 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:45.305000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Aug 13 00:03:45.305000 audit: BPF prog-id=14 op=UNLOAD
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.305000 audit: BPF prog-id=15 op=LOAD
Aug 13 00:03:45.305000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffde671f68 a2=94 a3=2 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:45.305000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Aug 13 00:03:45.306000 audit: BPF prog-id=15 op=UNLOAD
Aug 13 00:03:45.394544 env[1583]: time="2025-08-13T00:03:45.394486844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d66479978-gz5mz,Uid:81d299f9-8f68-4c3c-8b61-fb9ddec44923,Namespace:calico-system,Attempt:0,}"
Aug 13 00:03:45.404000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.404000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.404000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.404000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.404000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.404000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.404000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.404000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.404000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:03:45.404000 audit: BPF prog-id=16 op=LOAD
Aug 13 00:03:45.404000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffde671f28 a2=40 a3=ffffde671f58 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:45.404000 audit: PROCTITLE
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.404000 audit: BPF prog-id=16 op=UNLOAD Aug 13 00:03:45.404000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.404000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffde672040 a2=50 a3=0 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.404000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde671f98 a2=28 a3=ffffde6720c8 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffde671fc8 a2=28 a3=ffffde6720f8 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffde671e78 a2=28 a3=ffffde671fa8 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde671fe8 a2=28 a3=ffffde672118 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde671fc8 a2=28 a3=ffffde6720f8 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde671fb8 a2=28 a3=ffffde6720e8 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde671fe8 a2=28 a3=ffffde672118 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffde671fc8 a2=28 a3=ffffde6720f8 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffde671fe8 a2=28 a3=ffffde672118 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffde671fb8 a2=28 a3=ffffde6720e8 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffde672038 a2=28 a3=ffffde672178 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffde671d70 a2=50 a3=0 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:03:45.414000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit: BPF prog-id=17 op=LOAD Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffde671d78 a2=94 a3=5 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit: BPF prog-id=17 op=UNLOAD Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffde671e80 a2=50 a3=0 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffde671fc8 a2=4 a3=3 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC 
avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.414000 audit[4032]: AVC avc: denied { confidentiality } for pid=4032 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:03:45.414000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffde671fa8 a2=94 a3=6 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { bpf } for pid=4032 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { confidentiality } for pid=4032 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:03:45.417000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffde671778 a2=94 a3=83 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.417000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.417000 audit[4032]: AVC avc: denied { confidentiality } for pid=4032 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:03:45.417000 audit[4032]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffde671778 a2=94 a3=83 items=0 ppid=3940 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.417000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { bpf } for pid=4043 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { bpf } for pid=4043 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { perfmon } for pid=4043 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { perfmon } for pid=4043 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { perfmon } for pid=4043 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { perfmon } for pid=4043 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { perfmon } for pid=4043 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { bpf } for pid=4043 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { bpf } for pid=4043 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit: BPF prog-id=18 op=LOAD Aug 13 00:03:45.426000 audit[4043]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe424ba68 a2=98 a3=ffffe424ba58 items=0 ppid=3940 pid=4043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.426000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:03:45.426000 audit: BPF prog-id=18 op=UNLOAD Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { bpf } for pid=4043 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { bpf } for 
pid=4043 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { perfmon } for pid=4043 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { perfmon } for pid=4043 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { perfmon } for pid=4043 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { perfmon } for pid=4043 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { perfmon } for pid=4043 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { bpf } for pid=4043 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { bpf } for pid=4043 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit: BPF prog-id=19 op=LOAD Aug 13 00:03:45.426000 audit[4043]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe424b918 a2=74 a3=95 items=0 ppid=3940 pid=4043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.426000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:03:45.426000 audit: BPF prog-id=19 op=UNLOAD Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { bpf } for pid=4043 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { bpf } for pid=4043 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { perfmon } for pid=4043 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { perfmon } for pid=4043 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { perfmon } for pid=4043 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { perfmon } for pid=4043 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { perfmon } for pid=4043 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { bpf } for pid=4043 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit[4043]: AVC avc: denied { bpf } for pid=4043 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.426000 audit: BPF prog-id=20 op=LOAD Aug 13 00:03:45.426000 audit[4043]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe424b948 a2=40 a3=ffffe424b978 items=0 ppid=3940 pid=4043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.426000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:03:45.427000 audit: BPF prog-id=20 op=UNLOAD Aug 13 00:03:45.677882 systemd-networkd[1759]: vxlan.calico: Link UP Aug 13 00:03:45.677889 systemd-networkd[1759]: vxlan.calico: Gained carrier Aug 13 00:03:45.682121 systemd-networkd[1759]: calie62f9605430: Link UP Aug 13 00:03:45.695614 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:03:45.695744 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie62f9605430: link becomes ready Aug 13 00:03:45.697435 systemd-networkd[1759]: calie62f9605430: Gained carrier Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.521 [INFO][4054] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--dd293077f6-k8s-whisker--7d66479978--gz5mz-eth0 whisker-7d66479978- calico-system 81d299f9-8f68-4c3c-8b61-fb9ddec44923 941 0 2025-08-13 00:03:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7d66479978 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.8-a-dd293077f6 whisker-7d66479978-gz5mz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie62f9605430 [] [] }} ContainerID="6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" Namespace="calico-system" Pod="whisker-7d66479978-gz5mz" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-whisker--7d66479978--gz5mz-" Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.521 [INFO][4054] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" Namespace="calico-system" Pod="whisker-7d66479978-gz5mz" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-whisker--7d66479978--gz5mz-eth0" Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.553 [INFO][4068] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" HandleID="k8s-pod-network.6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" Workload="ci--3510.3.8--a--dd293077f6-k8s-whisker--7d66479978--gz5mz-eth0" Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.553 [INFO][4068] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" HandleID="k8s-pod-network.6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" Workload="ci--3510.3.8--a--dd293077f6-k8s-whisker--7d66479978--gz5mz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-a-dd293077f6", "pod":"whisker-7d66479978-gz5mz", "timestamp":"2025-08-13 00:03:45.553363231 +0000 UTC"}, Hostname:"ci-3510.3.8-a-dd293077f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.553 [INFO][4068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.553 [INFO][4068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.553 [INFO][4068] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-dd293077f6' Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.570 [INFO][4068] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.583 [INFO][4068] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.591 [INFO][4068] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.594 [INFO][4068] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.597 [INFO][4068] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.597 [INFO][4068] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.604 [INFO][4068] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7 Aug 13 00:03:45.716185 env[1583]: 2025-08-13 
00:03:45.615 [INFO][4068] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.632 [INFO][4068] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.65/26] block=192.168.55.64/26 handle="k8s-pod-network.6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.632 [INFO][4068] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.65/26] handle="k8s-pod-network.6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.632 [INFO][4068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:45.716185 env[1583]: 2025-08-13 00:03:45.632 [INFO][4068] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.65/26] IPv6=[] ContainerID="6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" HandleID="k8s-pod-network.6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" Workload="ci--3510.3.8--a--dd293077f6-k8s-whisker--7d66479978--gz5mz-eth0" Aug 13 00:03:45.716862 env[1583]: 2025-08-13 00:03:45.634 [INFO][4054] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" Namespace="calico-system" Pod="whisker-7d66479978-gz5mz" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-whisker--7d66479978--gz5mz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-whisker--7d66479978--gz5mz-eth0", GenerateName:"whisker-7d66479978-", Namespace:"calico-system", SelfLink:"", UID:"81d299f9-8f68-4c3c-8b61-fb9ddec44923", ResourceVersion:"941", 
Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d66479978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"", Pod:"whisker-7d66479978-gz5mz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.55.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie62f9605430", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:45.716862 env[1583]: 2025-08-13 00:03:45.634 [INFO][4054] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.65/32] ContainerID="6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" Namespace="calico-system" Pod="whisker-7d66479978-gz5mz" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-whisker--7d66479978--gz5mz-eth0" Aug 13 00:03:45.716862 env[1583]: 2025-08-13 00:03:45.634 [INFO][4054] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie62f9605430 ContainerID="6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" Namespace="calico-system" Pod="whisker-7d66479978-gz5mz" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-whisker--7d66479978--gz5mz-eth0" Aug 13 00:03:45.716862 env[1583]: 2025-08-13 00:03:45.698 [INFO][4054] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" Namespace="calico-system" Pod="whisker-7d66479978-gz5mz" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-whisker--7d66479978--gz5mz-eth0" Aug 13 00:03:45.716862 env[1583]: 2025-08-13 00:03:45.698 [INFO][4054] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" Namespace="calico-system" Pod="whisker-7d66479978-gz5mz" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-whisker--7d66479978--gz5mz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-whisker--7d66479978--gz5mz-eth0", GenerateName:"whisker-7d66479978-", Namespace:"calico-system", SelfLink:"", UID:"81d299f9-8f68-4c3c-8b61-fb9ddec44923", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d66479978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7", Pod:"whisker-7d66479978-gz5mz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.55.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie62f9605430", MAC:"f6:82:68:1b:2b:1b", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:45.716862 env[1583]: 2025-08-13 00:03:45.712 [INFO][4054] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7" Namespace="calico-system" Pod="whisker-7d66479978-gz5mz" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-whisker--7d66479978--gz5mz-eth0" Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 
audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit: BPF prog-id=21 op=LOAD Aug 13 00:03:45.726000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffc3d48d8 a2=98 a3=fffffc3d48c8 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.726000 audit: BPF prog-id=21 op=UNLOAD Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 
audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit: BPF prog-id=22 op=LOAD Aug 13 00:03:45.726000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffc3d45b8 a2=74 a3=95 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.726000 audit: BPF prog-id=22 op=UNLOAD Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC 
avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit: BPF prog-id=23 op=LOAD Aug 13 00:03:45.726000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffc3d4618 a2=94 a3=2 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.726000 audit: BPF prog-id=23 op=UNLOAD Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffc3d4648 a2=28 a3=fffffc3d4778 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffc3d4678 a2=28 a3=fffffc3d47a8 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.726000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffc3d4528 a2=28 a3=fffffc3d4658 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffc3d4698 a2=28 a3=fffffc3d47c8 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffc3d4678 a2=28 a3=fffffc3d47a8 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffc3d4668 a2=28 a3=fffffc3d4798 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffc3d4698 a2=28 a3=fffffc3d47c8 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffc3d4678 a2=28 a3=fffffc3d47a8 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.726000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.726000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffc3d4698 a2=28 a3=fffffc3d47c8 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for 
pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffc3d4668 a2=28 a3=fffffc3d4798 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.727000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffc3d46e8 a2=28 a3=fffffc3d4828 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.727000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:03:45.727000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit: BPF prog-id=24 op=LOAD Aug 13 00:03:45.727000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffc3d4508 a2=40 a3=fffffc3d4538 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.727000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.727000 audit: BPF prog-id=24 op=UNLOAD Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=fffffc3d4530 a2=50 a3=0 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.727000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=fffffc3d4530 a2=50 a3=0 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.727000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit: BPF prog-id=25 op=LOAD Aug 13 00:03:45.727000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffc3d3c98 a2=94 a3=2 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.727000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.727000 audit: BPF prog-id=25 op=UNLOAD Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { perfmon } for pid=4102 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit[4102]: AVC avc: denied { bpf } for pid=4102 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.727000 audit: BPF prog-id=26 op=LOAD Aug 13 00:03:45.727000 audit[4102]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffc3d3e28 a2=94 a3=30 items=0 ppid=3940 pid=4102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.727000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 
13 00:03:45.730000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit: BPF prog-id=27 op=LOAD Aug 13 00:03:45.730000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd3934a08 a2=98 a3=ffffd39349f8 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.730000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.730000 audit: BPF prog-id=27 op=UNLOAD Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC 
avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit: BPF prog-id=28 op=LOAD Aug 13 00:03:45.730000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd3934698 a2=74 a3=95 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.730000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.730000 audit: BPF prog-id=28 op=UNLOAD Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.730000 audit: BPF prog-id=29 op=LOAD Aug 13 00:03:45.730000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd39346f8 a2=94 a3=2 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.730000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.730000 audit: BPF prog-id=29 op=UNLOAD Aug 13 00:03:45.772694 env[1583]: time="2025-08-13T00:03:45.772598269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:45.772694 env[1583]: time="2025-08-13T00:03:45.772641749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:45.772694 env[1583]: time="2025-08-13T00:03:45.772670829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:45.773840 env[1583]: time="2025-08-13T00:03:45.773017989Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7 pid=4108 runtime=io.containerd.runc.v2 Aug 13 00:03:45.848581 env[1583]: time="2025-08-13T00:03:45.848501523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d66479978-gz5mz,Uid:81d299f9-8f68-4c3c-8b61-fb9ddec44923,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7\"" Aug 13 00:03:45.851805 env[1583]: time="2025-08-13T00:03:45.851771723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:03:45.862000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.862000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.862000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.862000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.862000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.862000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.862000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.862000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.862000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.862000 audit: BPF prog-id=30 op=LOAD Aug 13 00:03:45.862000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd39346b8 a2=40 a3=ffffd39346e8 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.862000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.862000 audit: BPF prog-id=30 op=UNLOAD Aug 13 00:03:45.862000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.862000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffd39347d0 a2=50 a3=0 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.862000 
audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd3934728 a2=28 a3=ffffd3934858 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd3934758 a2=28 a3=ffffd3934888 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:03:45.871000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd3934608 a2=28 a3=ffffd3934738 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd3934778 a2=28 a3=ffffd39348a8 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd3934758 a2=28 a3=ffffd3934888 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd3934748 a2=28 a3=ffffd3934878 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd3934778 a2=28 a3=ffffd39348a8 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 
audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd3934758 a2=28 a3=ffffd3934888 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd3934778 a2=28 a3=ffffd39348a8 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd3934748 a2=28 a3=ffffd3934878 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd39347c8 a2=28 a3=ffffd3934908 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd3934500 a2=50 a3=0 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC 
avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit: BPF prog-id=31 op=LOAD Aug 13 00:03:45.871000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd3934508 a2=94 a3=5 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit: BPF prog-id=31 op=UNLOAD Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd3934610 a2=50 a3=0 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffd3934758 a2=4 a3=3 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.871000 audit[4104]: AVC avc: denied { confidentiality } for pid=4104 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:03:45.871000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd3934738 a2=94 a3=6 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: 
denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { confidentiality } for pid=4104 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:03:45.872000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd3933f08 a2=94 a3=83 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.872000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { confidentiality } for pid=4104 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:03:45.872000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd3933f08 a2=94 a3=83 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.872000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd3935948 a2=10 a3=ffffd3935a38 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.872000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd3935808 a2=10 a3=ffffd39358f8 items=0 ppid=3940 pid=4104 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.872000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd3935778 a2=10 a3=ffffd39358f8 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.872000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.872000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:03:45.872000 audit[4104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd3935778 a2=10 a3=ffffd39358f8 items=0 ppid=3940 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:45.872000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:03:45.879000 audit: BPF 
prog-id=26 op=UNLOAD Aug 13 00:03:46.046000 audit[4170]: NETFILTER_CFG table=mangle:106 family=2 entries=16 op=nft_register_chain pid=4170 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:03:46.046000 audit[4170]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffd2f5eb40 a2=0 a3=ffffbe17efa8 items=0 ppid=3940 pid=4170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:46.046000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:03:46.075000 audit[4168]: NETFILTER_CFG table=nat:107 family=2 entries=15 op=nft_register_chain pid=4168 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:03:46.075000 audit[4168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe1ba6fb0 a2=0 a3=ffffbb7f8fa8 items=0 ppid=3940 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:46.075000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:03:46.081000 audit[4171]: NETFILTER_CFG table=filter:108 family=2 entries=39 op=nft_register_chain pid=4171 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:03:46.081000 audit[4171]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=ffffe68f1470 a2=0 a3=ffff9f610fa8 items=0 ppid=3940 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:46.081000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:03:46.225000 audit[4169]: NETFILTER_CFG table=raw:109 family=2 entries=21 op=nft_register_chain pid=4169 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:03:46.225000 audit[4169]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffc5a47c60 a2=0 a3=ffffbf1b6fa8 items=0 ppid=3940 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:46.225000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:03:46.266000 audit[4181]: NETFILTER_CFG table=filter:110 family=2 entries=59 op=nft_register_chain pid=4181 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:03:46.266000 audit[4181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=35860 a0=3 a1=fffff6061090 a2=0 a3=ffffb66befa8 items=0 ppid=3940 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:46.266000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:03:46.494165 kubelet[2655]: I0813 00:03:46.494123 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ea4925-5542-4fd9-b692-9049837d3ae0" path="/var/lib/kubelet/pods/42ea4925-5542-4fd9-b692-9049837d3ae0/volumes" Aug 13 00:03:47.127089 env[1583]: 
time="2025-08-13T00:03:47.127036101Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:47.132654 env[1583]: time="2025-08-13T00:03:47.132611582Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:47.136392 env[1583]: time="2025-08-13T00:03:47.136341583Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:47.143432 env[1583]: time="2025-08-13T00:03:47.143396344Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:47.144054 env[1583]: time="2025-08-13T00:03:47.144019104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 13 00:03:47.147395 env[1583]: time="2025-08-13T00:03:47.146822864Z" level=info msg="CreateContainer within sandbox \"6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:03:47.168729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount867304974.mount: Deactivated successfully. 
Aug 13 00:03:47.179719 env[1583]: time="2025-08-13T00:03:47.179673110Z" level=info msg="CreateContainer within sandbox \"6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e11f0af445bdd509655b44c8848fc634af68aef28e2a7e08fdf69a7f28168e00\"" Aug 13 00:03:47.180728 env[1583]: time="2025-08-13T00:03:47.180698070Z" level=info msg="StartContainer for \"e11f0af445bdd509655b44c8848fc634af68aef28e2a7e08fdf69a7f28168e00\"" Aug 13 00:03:47.249312 env[1583]: time="2025-08-13T00:03:47.248867682Z" level=info msg="StartContainer for \"e11f0af445bdd509655b44c8848fc634af68aef28e2a7e08fdf69a7f28168e00\" returns successfully" Aug 13 00:03:47.251951 env[1583]: time="2025-08-13T00:03:47.251899402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:03:47.341814 systemd-networkd[1759]: vxlan.calico: Gained IPv6LL Aug 13 00:03:47.662340 systemd-networkd[1759]: calie62f9605430: Gained IPv6LL Aug 13 00:03:48.492175 env[1583]: time="2025-08-13T00:03:48.492138929Z" level=info msg="StopPodSandbox for \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\"" Aug 13 00:03:48.628540 env[1583]: 2025-08-13 00:03:48.576 [INFO][4238] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Aug 13 00:03:48.628540 env[1583]: 2025-08-13 00:03:48.576 [INFO][4238] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" iface="eth0" netns="/var/run/netns/cni-aba7ccd9-029d-ef4e-aa2a-1bccbf0dd243" Aug 13 00:03:48.628540 env[1583]: 2025-08-13 00:03:48.576 [INFO][4238] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" iface="eth0" netns="/var/run/netns/cni-aba7ccd9-029d-ef4e-aa2a-1bccbf0dd243" Aug 13 00:03:48.628540 env[1583]: 2025-08-13 00:03:48.576 [INFO][4238] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" iface="eth0" netns="/var/run/netns/cni-aba7ccd9-029d-ef4e-aa2a-1bccbf0dd243" Aug 13 00:03:48.628540 env[1583]: 2025-08-13 00:03:48.576 [INFO][4238] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Aug 13 00:03:48.628540 env[1583]: 2025-08-13 00:03:48.576 [INFO][4238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Aug 13 00:03:48.628540 env[1583]: 2025-08-13 00:03:48.613 [INFO][4246] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" HandleID="k8s-pod-network.1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:03:48.628540 env[1583]: 2025-08-13 00:03:48.614 [INFO][4246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:48.628540 env[1583]: 2025-08-13 00:03:48.614 [INFO][4246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:03:48.628540 env[1583]: 2025-08-13 00:03:48.623 [WARNING][4246] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" HandleID="k8s-pod-network.1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:03:48.628540 env[1583]: 2025-08-13 00:03:48.623 [INFO][4246] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" HandleID="k8s-pod-network.1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:03:48.628540 env[1583]: 2025-08-13 00:03:48.625 [INFO][4246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:48.628540 env[1583]: 2025-08-13 00:03:48.627 [INFO][4238] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Aug 13 00:03:48.632569 env[1583]: time="2025-08-13T00:03:48.632411553Z" level=info msg="TearDown network for sandbox \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\" successfully" Aug 13 00:03:48.632569 env[1583]: time="2025-08-13T00:03:48.632451113Z" level=info msg="StopPodSandbox for \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\" returns successfully" Aug 13 00:03:48.631296 systemd[1]: run-netns-cni\x2daba7ccd9\x2d029d\x2def4e\x2daa2a\x2d1bccbf0dd243.mount: Deactivated successfully. 
Aug 13 00:03:48.633606 env[1583]: time="2025-08-13T00:03:48.633567433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gfdrg,Uid:527d02b8-3c8b-4d2b-ac23-d425550b3599,Namespace:kube-system,Attempt:1,}" Aug 13 00:03:48.812810 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:03:48.813224 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5db8c1c60f0: link becomes ready Aug 13 00:03:48.813060 systemd-networkd[1759]: cali5db8c1c60f0: Link UP Aug 13 00:03:48.815714 systemd-networkd[1759]: cali5db8c1c60f0: Gained carrier Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.701 [INFO][4252] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0 coredns-7c65d6cfc9- kube-system 527d02b8-3c8b-4d2b-ac23-d425550b3599 959 0 2025-08-13 00:03:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-a-dd293077f6 coredns-7c65d6cfc9-gfdrg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5db8c1c60f0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gfdrg" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-" Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.701 [INFO][4252] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gfdrg" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.740 [INFO][4265] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" HandleID="k8s-pod-network.da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.740 [INFO][4265] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" HandleID="k8s-pod-network.da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c30a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-a-dd293077f6", "pod":"coredns-7c65d6cfc9-gfdrg", "timestamp":"2025-08-13 00:03:48.74045201 +0000 UTC"}, Hostname:"ci-3510.3.8-a-dd293077f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.740 [INFO][4265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.741 [INFO][4265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.741 [INFO][4265] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-dd293077f6' Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.751 [INFO][4265] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.757 [INFO][4265] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.762 [INFO][4265] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.765 [INFO][4265] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.767 [INFO][4265] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.768 [INFO][4265] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.770 [INFO][4265] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.780 [INFO][4265] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.787 [INFO][4265] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.66/26] block=192.168.55.64/26 
handle="k8s-pod-network.da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.787 [INFO][4265] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.66/26] handle="k8s-pod-network.da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.787 [INFO][4265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:48.834493 env[1583]: 2025-08-13 00:03:48.787 [INFO][4265] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.66/26] IPv6=[] ContainerID="da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" HandleID="k8s-pod-network.da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:03:48.835388 env[1583]: 2025-08-13 00:03:48.789 [INFO][4252] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gfdrg" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"527d02b8-3c8b-4d2b-ac23-d425550b3599", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"", Pod:"coredns-7c65d6cfc9-gfdrg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5db8c1c60f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:48.835388 env[1583]: 2025-08-13 00:03:48.791 [INFO][4252] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.66/32] ContainerID="da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gfdrg" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:03:48.835388 env[1583]: 2025-08-13 00:03:48.791 [INFO][4252] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5db8c1c60f0 ContainerID="da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gfdrg" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:03:48.835388 env[1583]: 2025-08-13 00:03:48.816 [INFO][4252] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-gfdrg" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:03:48.835388 env[1583]: 2025-08-13 00:03:48.816 [INFO][4252] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gfdrg" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"527d02b8-3c8b-4d2b-ac23-d425550b3599", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a", Pod:"coredns-7c65d6cfc9-gfdrg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5db8c1c60f0", MAC:"26:bc:72:c6:88:ef", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:48.835388 env[1583]: 2025-08-13 00:03:48.832 [INFO][4252] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gfdrg" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:03:48.854000 audit[4279]: NETFILTER_CFG table=filter:111 family=2 entries=42 op=nft_register_chain pid=4279 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:03:48.854000 audit[4279]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffd10b73c0 a2=0 a3=ffffb8d66fa8 items=0 ppid=3940 pid=4279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:48.854000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:03:48.923768 env[1583]: time="2025-08-13T00:03:48.917100640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:48.923768 env[1583]: time="2025-08-13T00:03:48.917150520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:48.923768 env[1583]: time="2025-08-13T00:03:48.917161680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:48.923768 env[1583]: time="2025-08-13T00:03:48.917299360Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a pid=4288 runtime=io.containerd.runc.v2 Aug 13 00:03:48.991138 env[1583]: time="2025-08-13T00:03:48.991097172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gfdrg,Uid:527d02b8-3c8b-4d2b-ac23-d425550b3599,Namespace:kube-system,Attempt:1,} returns sandbox id \"da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a\"" Aug 13 00:03:49.008373 env[1583]: time="2025-08-13T00:03:49.008332695Z" level=info msg="CreateContainer within sandbox \"da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:03:49.036036 env[1583]: time="2025-08-13T00:03:49.035989539Z" level=info msg="CreateContainer within sandbox \"da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0d7fdd01249751479d5efc5da9dfa007cd4c0525d5c0cc1b5482fd71a027797\"" Aug 13 00:03:49.038580 env[1583]: time="2025-08-13T00:03:49.038544060Z" level=info msg="StartContainer for \"c0d7fdd01249751479d5efc5da9dfa007cd4c0525d5c0cc1b5482fd71a027797\"" Aug 13 00:03:49.209278 env[1583]: time="2025-08-13T00:03:49.209221968Z" level=info msg="StartContainer for \"c0d7fdd01249751479d5efc5da9dfa007cd4c0525d5c0cc1b5482fd71a027797\" returns successfully" Aug 13 00:03:49.254839 env[1583]: time="2025-08-13T00:03:49.254790455Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:49.261318 env[1583]: time="2025-08-13T00:03:49.261274896Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:49.265413 env[1583]: time="2025-08-13T00:03:49.265382417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:49.270242 env[1583]: time="2025-08-13T00:03:49.270210417Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:49.270941 env[1583]: time="2025-08-13T00:03:49.270910378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 13 00:03:49.274390 env[1583]: time="2025-08-13T00:03:49.274332738Z" level=info msg="CreateContainer within sandbox \"6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:03:49.331605 env[1583]: time="2025-08-13T00:03:49.331559827Z" level=info msg="CreateContainer within sandbox \"6e70c5dabf55669137e587224e6860607b4a49ef980b40a5c1c37367037958a7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3287e40ff8944bb8b0f4a114ae7ac899d63abe41a7d3bf8fcda2714755e78bfd\"" Aug 13 00:03:49.333704 env[1583]: time="2025-08-13T00:03:49.332648628Z" level=info msg="StartContainer for 
\"3287e40ff8944bb8b0f4a114ae7ac899d63abe41a7d3bf8fcda2714755e78bfd\"" Aug 13 00:03:49.394735 env[1583]: time="2025-08-13T00:03:49.394684638Z" level=info msg="StartContainer for \"3287e40ff8944bb8b0f4a114ae7ac899d63abe41a7d3bf8fcda2714755e78bfd\" returns successfully" Aug 13 00:03:49.484155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount527340933.mount: Deactivated successfully. Aug 13 00:03:49.496416 env[1583]: time="2025-08-13T00:03:49.496356974Z" level=info msg="StopPodSandbox for \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\"" Aug 13 00:03:49.594455 env[1583]: 2025-08-13 00:03:49.559 [INFO][4404] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Aug 13 00:03:49.594455 env[1583]: 2025-08-13 00:03:49.559 [INFO][4404] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" iface="eth0" netns="/var/run/netns/cni-202511ab-1af6-cbc9-7524-bb5edee4655d" Aug 13 00:03:49.594455 env[1583]: 2025-08-13 00:03:49.560 [INFO][4404] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" iface="eth0" netns="/var/run/netns/cni-202511ab-1af6-cbc9-7524-bb5edee4655d" Aug 13 00:03:49.594455 env[1583]: 2025-08-13 00:03:49.561 [INFO][4404] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" iface="eth0" netns="/var/run/netns/cni-202511ab-1af6-cbc9-7524-bb5edee4655d" Aug 13 00:03:49.594455 env[1583]: 2025-08-13 00:03:49.561 [INFO][4404] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Aug 13 00:03:49.594455 env[1583]: 2025-08-13 00:03:49.561 [INFO][4404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Aug 13 00:03:49.594455 env[1583]: 2025-08-13 00:03:49.580 [INFO][4411] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" HandleID="k8s-pod-network.4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:03:49.594455 env[1583]: 2025-08-13 00:03:49.580 [INFO][4411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:49.594455 env[1583]: 2025-08-13 00:03:49.580 [INFO][4411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:03:49.594455 env[1583]: 2025-08-13 00:03:49.589 [WARNING][4411] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" HandleID="k8s-pod-network.4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:03:49.594455 env[1583]: 2025-08-13 00:03:49.589 [INFO][4411] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" HandleID="k8s-pod-network.4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:03:49.594455 env[1583]: 2025-08-13 00:03:49.591 [INFO][4411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:49.594455 env[1583]: 2025-08-13 00:03:49.593 [INFO][4404] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Aug 13 00:03:49.599111 env[1583]: time="2025-08-13T00:03:49.597534151Z" level=info msg="TearDown network for sandbox \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\" successfully" Aug 13 00:03:49.599111 env[1583]: time="2025-08-13T00:03:49.597574071Z" level=info msg="StopPodSandbox for \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\" returns successfully" Aug 13 00:03:49.597197 systemd[1]: run-netns-cni\x2d202511ab\x2d1af6\x2dcbc9\x2d7524\x2dbb5edee4655d.mount: Deactivated successfully. 
Aug 13 00:03:49.601568 env[1583]: time="2025-08-13T00:03:49.601524591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8f466-rqg7k,Uid:77ef7b45-3d42-4c7d-b3d2-5b91108fefb3,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:03:49.760978 kubelet[2655]: I0813 00:03:49.760282 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gfdrg" podStartSLOduration=40.760264337 podStartE2EDuration="40.760264337s" podCreationTimestamp="2025-08-13 00:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:49.759528097 +0000 UTC m=+47.438426843" watchObservedRunningTime="2025-08-13 00:03:49.760264337 +0000 UTC m=+47.439163043" Aug 13 00:03:49.766990 systemd-networkd[1759]: cali48a074b18cb: Link UP Aug 13 00:03:49.778697 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali48a074b18cb: link becomes ready Aug 13 00:03:49.780968 systemd-networkd[1759]: cali48a074b18cb: Gained carrier Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.675 [INFO][4418] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0 calico-apiserver-5f97f8f466- calico-apiserver 77ef7b45-3d42-4c7d-b3d2-5b91108fefb3 971 0 2025-08-13 00:03:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f97f8f466 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-a-dd293077f6 calico-apiserver-5f97f8f466-rqg7k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali48a074b18cb [] [] }} ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Namespace="calico-apiserver" 
Pod="calico-apiserver-5f97f8f466-rqg7k" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-" Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.675 [INFO][4418] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8f466-rqg7k" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.703 [INFO][4431] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" HandleID="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.703 [INFO][4431] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" HandleID="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-a-dd293077f6", "pod":"calico-apiserver-5f97f8f466-rqg7k", "timestamp":"2025-08-13 00:03:49.703477208 +0000 UTC"}, Hostname:"ci-3510.3.8-a-dd293077f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.703 [INFO][4431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.703 [INFO][4431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.703 [INFO][4431] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-dd293077f6' Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.714 [INFO][4431] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.720 [INFO][4431] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.727 [INFO][4431] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.729 [INFO][4431] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.733 [INFO][4431] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.733 [INFO][4431] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.736 [INFO][4431] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.745 [INFO][4431] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:49.810721 env[1583]: 2025-08-13 
00:03:49.757 [INFO][4431] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.67/26] block=192.168.55.64/26 handle="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.757 [INFO][4431] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.67/26] handle="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.757 [INFO][4431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:49.810721 env[1583]: 2025-08-13 00:03:49.757 [INFO][4431] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.67/26] IPv6=[] ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" HandleID="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:03:49.811342 env[1583]: 2025-08-13 00:03:49.763 [INFO][4418] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8f466-rqg7k" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0", GenerateName:"calico-apiserver-5f97f8f466-", Namespace:"calico-apiserver", SelfLink:"", UID:"77ef7b45-3d42-4c7d-b3d2-5b91108fefb3", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f97f8f466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"", Pod:"calico-apiserver-5f97f8f466-rqg7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48a074b18cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:49.811342 env[1583]: 2025-08-13 00:03:49.763 [INFO][4418] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.67/32] ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8f466-rqg7k" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:03:49.811342 env[1583]: 2025-08-13 00:03:49.763 [INFO][4418] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48a074b18cb ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8f466-rqg7k" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:03:49.811342 env[1583]: 2025-08-13 00:03:49.767 [INFO][4418] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Namespace="calico-apiserver" 
Pod="calico-apiserver-5f97f8f466-rqg7k" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:03:49.811342 env[1583]: 2025-08-13 00:03:49.768 [INFO][4418] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8f466-rqg7k" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0", GenerateName:"calico-apiserver-5f97f8f466-", Namespace:"calico-apiserver", SelfLink:"", UID:"77ef7b45-3d42-4c7d-b3d2-5b91108fefb3", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f97f8f466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a", Pod:"calico-apiserver-5f97f8f466-rqg7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48a074b18cb", 
MAC:"b2:37:86:22:31:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:49.811342 env[1583]: 2025-08-13 00:03:49.803 [INFO][4418] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8f466-rqg7k" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:03:49.838520 env[1583]: time="2025-08-13T00:03:49.838448470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:49.838855 env[1583]: time="2025-08-13T00:03:49.838818910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:49.838969 env[1583]: time="2025-08-13T00:03:49.838946750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:49.839219 env[1583]: time="2025-08-13T00:03:49.839190310Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a pid=4453 runtime=io.containerd.runc.v2 Aug 13 00:03:49.847000 audit[4458]: NETFILTER_CFG table=filter:112 family=2 entries=20 op=nft_register_rule pid=4458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:49.847000 audit[4458]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff4adc9c0 a2=0 a3=1 items=0 ppid=2755 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:49.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:49.849000 audit[4458]: NETFILTER_CFG table=nat:113 family=2 entries=14 op=nft_register_rule pid=4458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:49.849000 audit[4458]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff4adc9c0 a2=0 a3=1 items=0 ppid=2755 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:49.849000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:49.879735 kernel: kauditd_printk_skb: 559 callbacks suppressed Aug 13 00:03:49.879867 kernel: audit: type=1325 audit(1755043429.863:424): table=filter:114 family=2 entries=16 op=nft_register_rule pid=4478 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 
00:03:49.863000 audit[4478]: NETFILTER_CFG table=filter:114 family=2 entries=16 op=nft_register_rule pid=4478 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:49.863000 audit[4478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe3796d30 a2=0 a3=1 items=0 ppid=2755 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:49.911141 kernel: audit: type=1300 audit(1755043429.863:424): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe3796d30 a2=0 a3=1 items=0 ppid=2755 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:49.863000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:49.924613 kernel: audit: type=1327 audit(1755043429.863:424): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:49.926000 audit[4478]: NETFILTER_CFG table=nat:115 family=2 entries=42 op=nft_register_chain pid=4478 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:49.926000 audit[4478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=17772 a0=3 a1=ffffe3796d30 a2=0 a3=1 items=0 ppid=2755 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:49.969913 kernel: audit: type=1325 audit(1755043429.926:425): table=nat:115 family=2 entries=42 op=nft_register_chain pid=4478 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:49.970037 kernel: 
audit: type=1300 audit(1755043429.926:425): arch=c00000b7 syscall=211 success=yes exit=17772 a0=3 a1=ffffe3796d30 a2=0 a3=1 items=0 ppid=2755 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:49.926000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:49.983744 kernel: audit: type=1327 audit(1755043429.926:425): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:49.990000 audit[4479]: NETFILTER_CFG table=filter:116 family=2 entries=54 op=nft_register_chain pid=4479 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:03:49.990000 audit[4479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=ffffd042bdf0 a2=0 a3=ffffa8d45fa8 items=0 ppid=3940 pid=4479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:50.007323 env[1583]: time="2025-08-13T00:03:50.007283858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8f466-rqg7k,Uid:77ef7b45-3d42-4c7d-b3d2-5b91108fefb3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a\"" Aug 13 00:03:50.034946 kernel: audit: type=1325 audit(1755043429.990:426): table=filter:116 family=2 entries=54 op=nft_register_chain pid=4479 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:03:50.035112 kernel: audit: type=1300 audit(1755043429.990:426): arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=ffffd042bdf0 a2=0 a3=ffffa8d45fa8 items=0 ppid=3940 pid=4479 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:50.037429 env[1583]: time="2025-08-13T00:03:50.037372062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:03:49.990000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:03:50.053962 kernel: audit: type=1327 audit(1755043429.990:426): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:03:50.158042 systemd-networkd[1759]: cali5db8c1c60f0: Gained IPv6LL Aug 13 00:03:50.493833 env[1583]: time="2025-08-13T00:03:50.493787096Z" level=info msg="StopPodSandbox for \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\"" Aug 13 00:03:50.547448 kubelet[2655]: I0813 00:03:50.546980 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7d66479978-gz5mz" podStartSLOduration=3.124791689 podStartE2EDuration="6.546961024s" podCreationTimestamp="2025-08-13 00:03:44 +0000 UTC" firstStartedPulling="2025-08-13 00:03:45.849998803 +0000 UTC m=+43.528897549" lastFinishedPulling="2025-08-13 00:03:49.272168138 +0000 UTC m=+46.951066884" observedRunningTime="2025-08-13 00:03:49.812721466 +0000 UTC m=+47.491620212" watchObservedRunningTime="2025-08-13 00:03:50.546961024 +0000 UTC m=+48.225859730" Aug 13 00:03:50.582328 env[1583]: 2025-08-13 00:03:50.547 [INFO][4505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Aug 13 00:03:50.582328 env[1583]: 2025-08-13 00:03:50.547 [INFO][4505] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" iface="eth0" netns="/var/run/netns/cni-a3fc03de-0e4c-87f7-2de8-ceba2a5a816b" Aug 13 00:03:50.582328 env[1583]: 2025-08-13 00:03:50.547 [INFO][4505] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" iface="eth0" netns="/var/run/netns/cni-a3fc03de-0e4c-87f7-2de8-ceba2a5a816b" Aug 13 00:03:50.582328 env[1583]: 2025-08-13 00:03:50.548 [INFO][4505] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" iface="eth0" netns="/var/run/netns/cni-a3fc03de-0e4c-87f7-2de8-ceba2a5a816b" Aug 13 00:03:50.582328 env[1583]: 2025-08-13 00:03:50.548 [INFO][4505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Aug 13 00:03:50.582328 env[1583]: 2025-08-13 00:03:50.548 [INFO][4505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Aug 13 00:03:50.582328 env[1583]: 2025-08-13 00:03:50.568 [INFO][4513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" HandleID="k8s-pod-network.edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 00:03:50.582328 env[1583]: 2025-08-13 00:03:50.568 [INFO][4513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:50.582328 env[1583]: 2025-08-13 00:03:50.568 [INFO][4513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:03:50.582328 env[1583]: 2025-08-13 00:03:50.577 [WARNING][4513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" HandleID="k8s-pod-network.edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 00:03:50.582328 env[1583]: 2025-08-13 00:03:50.577 [INFO][4513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" HandleID="k8s-pod-network.edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 00:03:50.582328 env[1583]: 2025-08-13 00:03:50.579 [INFO][4513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:50.582328 env[1583]: 2025-08-13 00:03:50.580 [INFO][4505] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Aug 13 00:03:50.585613 systemd[1]: run-netns-cni\x2da3fc03de\x2d0e4c\x2d87f7\x2d2de8\x2dceba2a5a816b.mount: Deactivated successfully. 
Aug 13 00:03:50.586875 env[1583]: time="2025-08-13T00:03:50.586800631Z" level=info msg="TearDown network for sandbox \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\" successfully" Aug 13 00:03:50.586875 env[1583]: time="2025-08-13T00:03:50.586852071Z" level=info msg="StopPodSandbox for \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\" returns successfully" Aug 13 00:03:50.587859 env[1583]: time="2025-08-13T00:03:50.587826511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664c8f75b-pc5h2,Uid:28ea706e-5d40-433e-9ee6-62a5f96b1be1,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:03:50.743081 systemd-networkd[1759]: cali10cd96087d8: Link UP Aug 13 00:03:50.751720 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali10cd96087d8: link becomes ready Aug 13 00:03:50.752638 systemd-networkd[1759]: cali10cd96087d8: Gained carrier Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.652 [INFO][4520] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0 calico-apiserver-5664c8f75b- calico-apiserver 28ea706e-5d40-433e-9ee6-62a5f96b1be1 993 0 2025-08-13 00:03:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5664c8f75b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-a-dd293077f6 calico-apiserver-5664c8f75b-pc5h2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali10cd96087d8 [] [] }} ContainerID="22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-pc5h2" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-" Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.652 
[INFO][4520] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-pc5h2" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.682 [INFO][4532] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" HandleID="k8s-pod-network.22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.682 [INFO][4532] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" HandleID="k8s-pod-network.22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-a-dd293077f6", "pod":"calico-apiserver-5664c8f75b-pc5h2", "timestamp":"2025-08-13 00:03:50.682681646 +0000 UTC"}, Hostname:"ci-3510.3.8-a-dd293077f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.682 [INFO][4532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.682 [INFO][4532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.683 [INFO][4532] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-dd293077f6' Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.696 [INFO][4532] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.703 [INFO][4532] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.710 [INFO][4532] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.715 [INFO][4532] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.719 [INFO][4532] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.719 [INFO][4532] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.721 [INFO][4532] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7 Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.727 [INFO][4532] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.737 [INFO][4532] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.68/26] block=192.168.55.64/26 
handle="k8s-pod-network.22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.737 [INFO][4532] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.68/26] handle="k8s-pod-network.22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.738 [INFO][4532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:50.772581 env[1583]: 2025-08-13 00:03:50.738 [INFO][4532] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.68/26] IPv6=[] ContainerID="22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" HandleID="k8s-pod-network.22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 00:03:50.773205 env[1583]: 2025-08-13 00:03:50.739 [INFO][4520] cni-plugin/k8s.go 418: Populated endpoint ContainerID="22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-pc5h2" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0", GenerateName:"calico-apiserver-5664c8f75b-", Namespace:"calico-apiserver", SelfLink:"", UID:"28ea706e-5d40-433e-9ee6-62a5f96b1be1", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664c8f75b", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"", Pod:"calico-apiserver-5664c8f75b-pc5h2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali10cd96087d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:50.773205 env[1583]: 2025-08-13 00:03:50.739 [INFO][4520] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.68/32] ContainerID="22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-pc5h2" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 00:03:50.773205 env[1583]: 2025-08-13 00:03:50.740 [INFO][4520] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10cd96087d8 ContainerID="22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-pc5h2" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 00:03:50.773205 env[1583]: 2025-08-13 00:03:50.743 [INFO][4520] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-pc5h2" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 
00:03:50.773205 env[1583]: 2025-08-13 00:03:50.756 [INFO][4520] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-pc5h2" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0", GenerateName:"calico-apiserver-5664c8f75b-", Namespace:"calico-apiserver", SelfLink:"", UID:"28ea706e-5d40-433e-9ee6-62a5f96b1be1", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664c8f75b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7", Pod:"calico-apiserver-5664c8f75b-pc5h2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali10cd96087d8", MAC:"ee:d5:e7:db:8e:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 
13 00:03:50.773205 env[1583]: 2025-08-13 00:03:50.771 [INFO][4520] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7" Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-pc5h2" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 00:03:50.784000 audit[4547]: NETFILTER_CFG table=filter:117 family=2 entries=45 op=nft_register_chain pid=4547 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:03:50.784000 audit[4547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24264 a0=3 a1=fffff7882e90 a2=0 a3=ffff962e4fa8 items=0 ppid=3940 pid=4547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:50.784000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:03:50.801677 kernel: audit: type=1325 audit(1755043430.784:427): table=filter:117 family=2 entries=45 op=nft_register_chain pid=4547 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:03:50.804746 env[1583]: time="2025-08-13T00:03:50.804647986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:50.804746 env[1583]: time="2025-08-13T00:03:50.804706746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:50.804746 env[1583]: time="2025-08-13T00:03:50.804724826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:50.805405 env[1583]: time="2025-08-13T00:03:50.805221506Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7 pid=4556 runtime=io.containerd.runc.v2 Aug 13 00:03:50.865431 env[1583]: time="2025-08-13T00:03:50.865379795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664c8f75b-pc5h2,Uid:28ea706e-5d40-433e-9ee6-62a5f96b1be1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7\"" Aug 13 00:03:51.309925 systemd-networkd[1759]: cali48a074b18cb: Gained IPv6LL Aug 13 00:03:51.491873 env[1583]: time="2025-08-13T00:03:51.491745175Z" level=info msg="StopPodSandbox for \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\"" Aug 13 00:03:51.506007 env[1583]: time="2025-08-13T00:03:51.505955857Z" level=info msg="StopPodSandbox for \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\"" Aug 13 00:03:51.649626 env[1583]: 2025-08-13 00:03:51.593 [INFO][4621] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Aug 13 00:03:51.649626 env[1583]: 2025-08-13 00:03:51.593 [INFO][4621] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" iface="eth0" netns="/var/run/netns/cni-56b6fdf6-41b6-7c14-4e83-211a48e214bf" Aug 13 00:03:51.649626 env[1583]: 2025-08-13 00:03:51.593 [INFO][4621] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" iface="eth0" netns="/var/run/netns/cni-56b6fdf6-41b6-7c14-4e83-211a48e214bf" Aug 13 00:03:51.649626 env[1583]: 2025-08-13 00:03:51.593 [INFO][4621] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" iface="eth0" netns="/var/run/netns/cni-56b6fdf6-41b6-7c14-4e83-211a48e214bf" Aug 13 00:03:51.649626 env[1583]: 2025-08-13 00:03:51.593 [INFO][4621] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Aug 13 00:03:51.649626 env[1583]: 2025-08-13 00:03:51.593 [INFO][4621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Aug 13 00:03:51.649626 env[1583]: 2025-08-13 00:03:51.629 [INFO][4631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" HandleID="k8s-pod-network.28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:03:51.649626 env[1583]: 2025-08-13 00:03:51.629 [INFO][4631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:51.649626 env[1583]: 2025-08-13 00:03:51.629 [INFO][4631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:03:51.649626 env[1583]: 2025-08-13 00:03:51.639 [WARNING][4631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" HandleID="k8s-pod-network.28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:03:51.649626 env[1583]: 2025-08-13 00:03:51.639 [INFO][4631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" HandleID="k8s-pod-network.28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:03:51.649626 env[1583]: 2025-08-13 00:03:51.641 [INFO][4631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:51.649626 env[1583]: 2025-08-13 00:03:51.647 [INFO][4621] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Aug 13 00:03:51.656502 env[1583]: time="2025-08-13T00:03:51.652407360Z" level=info msg="TearDown network for sandbox \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\" successfully" Aug 13 00:03:51.656502 env[1583]: time="2025-08-13T00:03:51.652454840Z" level=info msg="StopPodSandbox for \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\" returns successfully" Aug 13 00:03:51.655423 systemd[1]: run-netns-cni\x2d56b6fdf6\x2d41b6\x2d7c14\x2d4e83\x2d211a48e214bf.mount: Deactivated successfully. 
Aug 13 00:03:51.659238 env[1583]: time="2025-08-13T00:03:51.659194441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ktbwq,Uid:3816201b-b93a-4ec2-a67a-d16b5eed4f52,Namespace:kube-system,Attempt:1,}" Aug 13 00:03:51.662091 env[1583]: 2025-08-13 00:03:51.580 [INFO][4611] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Aug 13 00:03:51.662091 env[1583]: 2025-08-13 00:03:51.581 [INFO][4611] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" iface="eth0" netns="/var/run/netns/cni-fc1dcb6d-4363-f445-507d-f974d6750562" Aug 13 00:03:51.662091 env[1583]: 2025-08-13 00:03:51.581 [INFO][4611] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" iface="eth0" netns="/var/run/netns/cni-fc1dcb6d-4363-f445-507d-f974d6750562" Aug 13 00:03:51.662091 env[1583]: 2025-08-13 00:03:51.588 [INFO][4611] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" iface="eth0" netns="/var/run/netns/cni-fc1dcb6d-4363-f445-507d-f974d6750562" Aug 13 00:03:51.662091 env[1583]: 2025-08-13 00:03:51.588 [INFO][4611] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Aug 13 00:03:51.662091 env[1583]: 2025-08-13 00:03:51.588 [INFO][4611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Aug 13 00:03:51.662091 env[1583]: 2025-08-13 00:03:51.640 [INFO][4629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" HandleID="k8s-pod-network.838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Workload="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:03:51.662091 env[1583]: 2025-08-13 00:03:51.640 [INFO][4629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:51.662091 env[1583]: 2025-08-13 00:03:51.641 [INFO][4629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:03:51.662091 env[1583]: 2025-08-13 00:03:51.655 [WARNING][4629] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" HandleID="k8s-pod-network.838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Workload="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:03:51.662091 env[1583]: 2025-08-13 00:03:51.655 [INFO][4629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" HandleID="k8s-pod-network.838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Workload="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:03:51.662091 env[1583]: 2025-08-13 00:03:51.657 [INFO][4629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:51.662091 env[1583]: 2025-08-13 00:03:51.660 [INFO][4611] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Aug 13 00:03:51.665191 systemd[1]: run-netns-cni\x2dfc1dcb6d\x2d4363\x2df445\x2d507d\x2df974d6750562.mount: Deactivated successfully. 
Aug 13 00:03:51.666889 env[1583]: time="2025-08-13T00:03:51.665381682Z" level=info msg="TearDown network for sandbox \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\" successfully" Aug 13 00:03:51.666889 env[1583]: time="2025-08-13T00:03:51.665418522Z" level=info msg="StopPodSandbox for \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\" returns successfully" Aug 13 00:03:51.679393 env[1583]: time="2025-08-13T00:03:51.679331804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dc7fc,Uid:8e788131-5ffd-4005-9137-e23c17af1da5,Namespace:calico-system,Attempt:1,}" Aug 13 00:03:51.823684 systemd-networkd[1759]: cali10cd96087d8: Gained IPv6LL Aug 13 00:03:51.911921 systemd-networkd[1759]: calif97246af908: Link UP Aug 13 00:03:51.930996 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:03:51.931122 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif97246af908: link becomes ready Aug 13 00:03:51.937744 systemd-networkd[1759]: calif97246af908: Gained carrier Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.759 [INFO][4643] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0 coredns-7c65d6cfc9- kube-system 3816201b-b93a-4ec2-a67a-d16b5eed4f52 1003 0 2025-08-13 00:03:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-a-dd293077f6 coredns-7c65d6cfc9-ktbwq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif97246af908 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ktbwq" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-" Aug 13 
00:03:51.968900 env[1583]: 2025-08-13 00:03:51.759 [INFO][4643] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ktbwq" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.807 [INFO][4668] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" HandleID="k8s-pod-network.6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.807 [INFO][4668] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" HandleID="k8s-pod-network.6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-a-dd293077f6", "pod":"coredns-7c65d6cfc9-ktbwq", "timestamp":"2025-08-13 00:03:51.807335505 +0000 UTC"}, Hostname:"ci-3510.3.8-a-dd293077f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.807 [INFO][4668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.807 [INFO][4668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.807 [INFO][4668] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-dd293077f6' Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.819 [INFO][4668] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.826 [INFO][4668] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.834 [INFO][4668] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.842 [INFO][4668] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.848 [INFO][4668] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.848 [INFO][4668] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.854 [INFO][4668] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8 Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.865 [INFO][4668] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.881 [INFO][4668] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.69/26] block=192.168.55.64/26 
handle="k8s-pod-network.6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.881 [INFO][4668] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.69/26] handle="k8s-pod-network.6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.881 [INFO][4668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:51.968900 env[1583]: 2025-08-13 00:03:51.881 [INFO][4668] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.69/26] IPv6=[] ContainerID="6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" HandleID="k8s-pod-network.6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:03:51.969499 env[1583]: 2025-08-13 00:03:51.886 [INFO][4643] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ktbwq" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3816201b-b93a-4ec2-a67a-d16b5eed4f52", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"", Pod:"coredns-7c65d6cfc9-ktbwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif97246af908", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:51.969499 env[1583]: 2025-08-13 00:03:51.887 [INFO][4643] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.69/32] ContainerID="6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ktbwq" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:03:51.969499 env[1583]: 2025-08-13 00:03:51.887 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif97246af908 ContainerID="6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ktbwq" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:03:51.969499 env[1583]: 2025-08-13 00:03:51.939 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ktbwq" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:03:51.969499 env[1583]: 2025-08-13 00:03:51.940 [INFO][4643] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ktbwq" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3816201b-b93a-4ec2-a67a-d16b5eed4f52", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8", Pod:"coredns-7c65d6cfc9-ktbwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif97246af908", MAC:"42:00:b0:69:6d:bf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:51.969499 env[1583]: 2025-08-13 00:03:51.961 [INFO][4643] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ktbwq" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:03:52.004471 systemd-networkd[1759]: calie8e42674a46: Link UP Aug 13 00:03:51.999000 audit[4693]: NETFILTER_CFG table=filter:118 family=2 entries=44 op=nft_register_chain pid=4693 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:03:51.999000 audit[4693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21532 a0=3 a1=ffffefc81b10 a2=0 a3=ffffb9132fa8 items=0 ppid=3940 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:51.999000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:03:52.014173 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie8e42674a46: link becomes ready Aug 13 00:03:52.014250 systemd-networkd[1759]: calie8e42674a46: Gained carrier Aug 13 00:03:52.036832 env[1583]: time="2025-08-13T00:03:52.036761581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:52.037899 env[1583]: time="2025-08-13T00:03:52.037857661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:52.038071 env[1583]: time="2025-08-13T00:03:52.038044981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:52.038996 env[1583]: time="2025-08-13T00:03:52.038938581Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8 pid=4702 runtime=io.containerd.runc.v2 Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.786 [INFO][4655] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0 csi-node-driver- calico-system 8e788131-5ffd-4005-9137-e23c17af1da5 1002 0 2025-08-13 00:03:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.8-a-dd293077f6 csi-node-driver-dc7fc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie8e42674a46 [] [] }} ContainerID="f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" Namespace="calico-system" Pod="csi-node-driver-dc7fc" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-" Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.787 [INFO][4655] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" Namespace="calico-system" 
Pod="csi-node-driver-dc7fc" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.879 [INFO][4675] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" HandleID="k8s-pod-network.f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" Workload="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.879 [INFO][4675] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" HandleID="k8s-pod-network.f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" Workload="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb310), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-a-dd293077f6", "pod":"csi-node-driver-dc7fc", "timestamp":"2025-08-13 00:03:51.875420036 +0000 UTC"}, Hostname:"ci-3510.3.8-a-dd293077f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.880 [INFO][4675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.885 [INFO][4675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.885 [INFO][4675] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-dd293077f6' Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.939 [INFO][4675] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.959 [INFO][4675] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.967 [INFO][4675] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.971 [INFO][4675] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.974 [INFO][4675] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.974 [INFO][4675] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.977 [INFO][4675] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0 Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.984 [INFO][4675] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.994 [INFO][4675] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.70/26] block=192.168.55.64/26 
handle="k8s-pod-network.f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.994 [INFO][4675] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.70/26] handle="k8s-pod-network.f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.994 [INFO][4675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:52.049725 env[1583]: 2025-08-13 00:03:51.994 [INFO][4675] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.70/26] IPv6=[] ContainerID="f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" HandleID="k8s-pod-network.f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" Workload="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:03:52.050330 env[1583]: 2025-08-13 00:03:52.001 [INFO][4655] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" Namespace="calico-system" Pod="csi-node-driver-dc7fc" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e788131-5ffd-4005-9137-e23c17af1da5", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"", Pod:"csi-node-driver-dc7fc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8e42674a46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:52.050330 env[1583]: 2025-08-13 00:03:52.001 [INFO][4655] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.70/32] ContainerID="f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" Namespace="calico-system" Pod="csi-node-driver-dc7fc" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:03:52.050330 env[1583]: 2025-08-13 00:03:52.001 [INFO][4655] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8e42674a46 ContainerID="f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" Namespace="calico-system" Pod="csi-node-driver-dc7fc" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:03:52.050330 env[1583]: 2025-08-13 00:03:52.025 [INFO][4655] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" Namespace="calico-system" Pod="csi-node-driver-dc7fc" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:03:52.050330 env[1583]: 2025-08-13 00:03:52.026 [INFO][4655] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" Namespace="calico-system" Pod="csi-node-driver-dc7fc" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e788131-5ffd-4005-9137-e23c17af1da5", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0", Pod:"csi-node-driver-dc7fc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8e42674a46", MAC:"76:3f:39:00:5a:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:52.050330 env[1583]: 2025-08-13 00:03:52.047 [INFO][4655] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0" Namespace="calico-system" Pod="csi-node-driver-dc7fc" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:03:52.068000 audit[4727]: NETFILTER_CFG table=filter:119 family=2 entries=58 op=nft_register_chain pid=4727 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:03:52.068000 audit[4727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27180 a0=3 a1=ffffc85c6f30 a2=0 a3=ffff9f456fa8 items=0 ppid=3940 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:52.068000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:03:52.095146 env[1583]: time="2025-08-13T00:03:52.095070950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:52.095400 env[1583]: time="2025-08-13T00:03:52.095350790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:52.095510 env[1583]: time="2025-08-13T00:03:52.095487630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:52.095886 env[1583]: time="2025-08-13T00:03:52.095850030Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0 pid=4735 runtime=io.containerd.runc.v2 Aug 13 00:03:52.119829 env[1583]: time="2025-08-13T00:03:52.119788474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ktbwq,Uid:3816201b-b93a-4ec2-a67a-d16b5eed4f52,Namespace:kube-system,Attempt:1,} returns sandbox id \"6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8\"" Aug 13 00:03:52.124479 env[1583]: time="2025-08-13T00:03:52.124441955Z" level=info msg="CreateContainer within sandbox \"6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:03:52.168201 env[1583]: time="2025-08-13T00:03:52.168160721Z" level=info msg="CreateContainer within sandbox \"6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"129271874ea37b2b92b8be8ca51e199e3616a495edc97af0ffb157653392a9bc\"" Aug 13 00:03:52.171718 env[1583]: time="2025-08-13T00:03:52.170642322Z" level=info msg="StartContainer for \"129271874ea37b2b92b8be8ca51e199e3616a495edc97af0ffb157653392a9bc\"" Aug 13 00:03:52.193828 env[1583]: time="2025-08-13T00:03:52.193786765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dc7fc,Uid:8e788131-5ffd-4005-9137-e23c17af1da5,Namespace:calico-system,Attempt:1,} returns sandbox id \"f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0\"" Aug 13 00:03:52.469383 env[1583]: time="2025-08-13T00:03:52.469263128Z" level=info msg="StartContainer for \"129271874ea37b2b92b8be8ca51e199e3616a495edc97af0ffb157653392a9bc\" returns successfully" Aug 13 00:03:52.492942 env[1583]: 
time="2025-08-13T00:03:52.492896532Z" level=info msg="StopPodSandbox for \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\"" Aug 13 00:03:52.511776 env[1583]: time="2025-08-13T00:03:52.511733175Z" level=info msg="StopPodSandbox for \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\"" Aug 13 00:03:52.663732 env[1583]: 2025-08-13 00:03:52.591 [INFO][4838] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Aug 13 00:03:52.663732 env[1583]: 2025-08-13 00:03:52.591 [INFO][4838] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" iface="eth0" netns="/var/run/netns/cni-6ed3bb5f-bd06-8663-860f-7e5fa73c9b9f" Aug 13 00:03:52.663732 env[1583]: 2025-08-13 00:03:52.593 [INFO][4838] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" iface="eth0" netns="/var/run/netns/cni-6ed3bb5f-bd06-8663-860f-7e5fa73c9b9f" Aug 13 00:03:52.663732 env[1583]: 2025-08-13 00:03:52.593 [INFO][4838] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" iface="eth0" netns="/var/run/netns/cni-6ed3bb5f-bd06-8663-860f-7e5fa73c9b9f" Aug 13 00:03:52.663732 env[1583]: 2025-08-13 00:03:52.594 [INFO][4838] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Aug 13 00:03:52.663732 env[1583]: 2025-08-13 00:03:52.594 [INFO][4838] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Aug 13 00:03:52.663732 env[1583]: 2025-08-13 00:03:52.646 [INFO][4852] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" HandleID="k8s-pod-network.8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Workload="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:03:52.663732 env[1583]: 2025-08-13 00:03:52.646 [INFO][4852] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:52.663732 env[1583]: 2025-08-13 00:03:52.647 [INFO][4852] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:03:52.663732 env[1583]: 2025-08-13 00:03:52.657 [WARNING][4852] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" HandleID="k8s-pod-network.8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Workload="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:03:52.663732 env[1583]: 2025-08-13 00:03:52.657 [INFO][4852] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" HandleID="k8s-pod-network.8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Workload="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:03:52.663732 env[1583]: 2025-08-13 00:03:52.659 [INFO][4852] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:52.663732 env[1583]: 2025-08-13 00:03:52.662 [INFO][4838] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Aug 13 00:03:52.667285 systemd[1]: run-netns-cni\x2d6ed3bb5f\x2dbd06\x2d8663\x2d860f\x2d7e5fa73c9b9f.mount: Deactivated successfully. 
Aug 13 00:03:52.668949 env[1583]: time="2025-08-13T00:03:52.668875320Z" level=info msg="TearDown network for sandbox \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\" successfully" Aug 13 00:03:52.669231 env[1583]: time="2025-08-13T00:03:52.669194520Z" level=info msg="StopPodSandbox for \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\" returns successfully" Aug 13 00:03:52.670054 env[1583]: time="2025-08-13T00:03:52.670023200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kcl6d,Uid:ec1ef1a0-b4ac-42a1-9532-047e283102fa,Namespace:calico-system,Attempt:1,}" Aug 13 00:03:52.718725 env[1583]: 2025-08-13 00:03:52.628 [INFO][4845] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Aug 13 00:03:52.718725 env[1583]: 2025-08-13 00:03:52.628 [INFO][4845] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" iface="eth0" netns="/var/run/netns/cni-e82c4037-aff7-cf46-cbdc-1d587597675e" Aug 13 00:03:52.718725 env[1583]: 2025-08-13 00:03:52.629 [INFO][4845] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" iface="eth0" netns="/var/run/netns/cni-e82c4037-aff7-cf46-cbdc-1d587597675e" Aug 13 00:03:52.718725 env[1583]: 2025-08-13 00:03:52.629 [INFO][4845] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" iface="eth0" netns="/var/run/netns/cni-e82c4037-aff7-cf46-cbdc-1d587597675e" Aug 13 00:03:52.718725 env[1583]: 2025-08-13 00:03:52.629 [INFO][4845] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Aug 13 00:03:52.718725 env[1583]: 2025-08-13 00:03:52.629 [INFO][4845] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Aug 13 00:03:52.718725 env[1583]: 2025-08-13 00:03:52.680 [INFO][4861] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" HandleID="k8s-pod-network.290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:03:52.718725 env[1583]: 2025-08-13 00:03:52.680 [INFO][4861] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:52.718725 env[1583]: 2025-08-13 00:03:52.680 [INFO][4861] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:03:52.718725 env[1583]: 2025-08-13 00:03:52.694 [WARNING][4861] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" HandleID="k8s-pod-network.290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:03:52.718725 env[1583]: 2025-08-13 00:03:52.694 [INFO][4861] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" HandleID="k8s-pod-network.290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:03:52.718725 env[1583]: 2025-08-13 00:03:52.698 [INFO][4861] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:52.718725 env[1583]: 2025-08-13 00:03:52.716 [INFO][4845] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Aug 13 00:03:52.721495 systemd[1]: run-netns-cni\x2de82c4037\x2daff7\x2dcf46\x2dcbdc\x2d1d587597675e.mount: Deactivated successfully. 
Aug 13 00:03:52.723636 env[1583]: time="2025-08-13T00:03:52.723572528Z" level=info msg="TearDown network for sandbox \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\" successfully" Aug 13 00:03:52.726074 env[1583]: time="2025-08-13T00:03:52.723634928Z" level=info msg="StopPodSandbox for \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\" returns successfully" Aug 13 00:03:52.726074 env[1583]: time="2025-08-13T00:03:52.724670088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8f466-r5z5g,Uid:87091bc5-d911-4547-82d0-decf534f50dd,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:03:52.834107 kubelet[2655]: I0813 00:03:52.833392 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ktbwq" podStartSLOduration=43.833358625 podStartE2EDuration="43.833358625s" podCreationTimestamp="2025-08-13 00:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:52.833195105 +0000 UTC m=+50.512093891" watchObservedRunningTime="2025-08-13 00:03:52.833358625 +0000 UTC m=+50.512257371" Aug 13 00:03:52.879000 audit[4901]: NETFILTER_CFG table=filter:120 family=2 entries=12 op=nft_register_rule pid=4901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:52.879000 audit[4901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=fffffebcf150 a2=0 a3=1 items=0 ppid=2755 pid=4901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:52.879000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:52.881000 audit[4901]: NETFILTER_CFG table=nat:121 family=2 entries=46 op=nft_register_rule 
pid=4901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:52.881000 audit[4901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14964 a0=3 a1=fffffebcf150 a2=0 a3=1 items=0 ppid=2755 pid=4901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:52.881000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:52.922000 audit[4910]: NETFILTER_CFG table=filter:122 family=2 entries=12 op=nft_register_rule pid=4910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:52.922000 audit[4910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffd607a580 a2=0 a3=1 items=0 ppid=2755 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:52.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:52.954000 audit[4910]: NETFILTER_CFG table=nat:123 family=2 entries=58 op=nft_register_chain pid=4910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:52.954000 audit[4910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20628 a0=3 a1=ffffd607a580 a2=0 a3=1 items=0 ppid=2755 pid=4910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:52.954000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:52.971017 systemd-networkd[1759]: calib0a1b86e791: 
Link UP Aug 13 00:03:52.983051 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:03:52.983167 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib0a1b86e791: link becomes ready Aug 13 00:03:52.985588 systemd-networkd[1759]: calib0a1b86e791: Gained carrier Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.775 [INFO][4868] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0 goldmane-58fd7646b9- calico-system ec1ef1a0-b4ac-42a1-9532-047e283102fa 1021 0 2025-08-13 00:03:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.8-a-dd293077f6 goldmane-58fd7646b9-kcl6d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib0a1b86e791 [] [] }} ContainerID="8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kcl6d" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-" Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.775 [INFO][4868] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kcl6d" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.892 [INFO][4887] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" HandleID="k8s-pod-network.8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" Workload="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.896 
[INFO][4887] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" HandleID="k8s-pod-network.8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" Workload="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb640), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-a-dd293077f6", "pod":"goldmane-58fd7646b9-kcl6d", "timestamp":"2025-08-13 00:03:52.892276954 +0000 UTC"}, Hostname:"ci-3510.3.8-a-dd293077f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.896 [INFO][4887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.896 [INFO][4887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.897 [INFO][4887] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-dd293077f6' Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.912 [INFO][4887] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.925 [INFO][4887] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.931 [INFO][4887] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.935 [INFO][4887] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.938 [INFO][4887] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.938 [INFO][4887] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.941 [INFO][4887] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.947 [INFO][4887] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.961 [INFO][4887] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.71/26] block=192.168.55.64/26 
handle="k8s-pod-network.8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.961 [INFO][4887] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.71/26] handle="k8s-pod-network.8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.961 [INFO][4887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:53.026734 env[1583]: 2025-08-13 00:03:52.961 [INFO][4887] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.71/26] IPv6=[] ContainerID="8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" HandleID="k8s-pod-network.8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" Workload="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:03:53.027335 env[1583]: 2025-08-13 00:03:52.963 [INFO][4868] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kcl6d" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"ec1ef1a0-b4ac-42a1-9532-047e283102fa", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"", Pod:"goldmane-58fd7646b9-kcl6d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib0a1b86e791", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:53.027335 env[1583]: 2025-08-13 00:03:52.964 [INFO][4868] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.71/32] ContainerID="8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kcl6d" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:03:53.027335 env[1583]: 2025-08-13 00:03:52.964 [INFO][4868] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0a1b86e791 ContainerID="8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kcl6d" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:03:53.027335 env[1583]: 2025-08-13 00:03:52.987 [INFO][4868] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kcl6d" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:03:53.027335 env[1583]: 2025-08-13 00:03:52.993 [INFO][4868] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kcl6d" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"ec1ef1a0-b4ac-42a1-9532-047e283102fa", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd", Pod:"goldmane-58fd7646b9-kcl6d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib0a1b86e791", MAC:"f6:9c:8a:77:85:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:53.027335 env[1583]: 2025-08-13 00:03:53.019 [INFO][4868] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd" Namespace="calico-system" Pod="goldmane-58fd7646b9-kcl6d" 
WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:03:53.053000 audit[4924]: NETFILTER_CFG table=filter:124 family=2 entries=60 op=nft_register_chain pid=4924 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:03:53.053000 audit[4924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29916 a0=3 a1=ffffcf6b8df0 a2=0 a3=ffff98a8ffa8 items=0 ppid=3940 pid=4924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:53.053000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:03:53.076598 systemd-networkd[1759]: cali2c3373c6db8: Link UP Aug 13 00:03:53.077696 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2c3373c6db8: link becomes ready Aug 13 00:03:53.078198 systemd-networkd[1759]: cali2c3373c6db8: Gained carrier Aug 13 00:03:53.084392 env[1583]: time="2025-08-13T00:03:53.074831463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:53.084392 env[1583]: time="2025-08-13T00:03:53.074870463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:53.084392 env[1583]: time="2025-08-13T00:03:53.074880663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:53.084392 env[1583]: time="2025-08-13T00:03:53.075002863Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd pid=4932 runtime=io.containerd.runc.v2 Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:52.881 [INFO][4881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0 calico-apiserver-5f97f8f466- calico-apiserver 87091bc5-d911-4547-82d0-decf534f50dd 1022 0 2025-08-13 00:03:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f97f8f466 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-a-dd293077f6 calico-apiserver-5f97f8f466-r5z5g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2c3373c6db8 [] [] }} ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8f466-r5z5g" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-" Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:52.881 [INFO][4881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8f466-r5z5g" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:52.930 [INFO][4905] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" 
HandleID="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:52.930 [INFO][4905] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" HandleID="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002caff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-a-dd293077f6", "pod":"calico-apiserver-5f97f8f466-r5z5g", "timestamp":"2025-08-13 00:03:52.93009172 +0000 UTC"}, Hostname:"ci-3510.3.8-a-dd293077f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:52.930 [INFO][4905] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:52.961 [INFO][4905] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:52.962 [INFO][4905] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-dd293077f6' Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:53.015 [INFO][4905] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:53.026 [INFO][4905] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:53.034 [INFO][4905] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:53.039 [INFO][4905] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:53.042 [INFO][4905] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:53.043 [INFO][4905] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:53.045 [INFO][4905] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:53.053 [INFO][4905] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:53.064 [INFO][4905] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.72/26] block=192.168.55.64/26 
handle="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:53.064 [INFO][4905] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.72/26] handle="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:53.064 [INFO][4905] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:53.104797 env[1583]: 2025-08-13 00:03:53.064 [INFO][4905] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.72/26] IPv6=[] ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" HandleID="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:03:53.105375 env[1583]: 2025-08-13 00:03:53.066 [INFO][4881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8f466-r5z5g" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0", GenerateName:"calico-apiserver-5f97f8f466-", Namespace:"calico-apiserver", SelfLink:"", UID:"87091bc5-d911-4547-82d0-decf534f50dd", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f97f8f466", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"", Pod:"calico-apiserver-5f97f8f466-r5z5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c3373c6db8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:53.105375 env[1583]: 2025-08-13 00:03:53.066 [INFO][4881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.72/32] ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8f466-r5z5g" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:03:53.105375 env[1583]: 2025-08-13 00:03:53.066 [INFO][4881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c3373c6db8 ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8f466-r5z5g" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:03:53.105375 env[1583]: 2025-08-13 00:03:53.080 [INFO][4881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8f466-r5z5g" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 
00:03:53.105375 env[1583]: 2025-08-13 00:03:53.081 [INFO][4881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8f466-r5z5g" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0", GenerateName:"calico-apiserver-5f97f8f466-", Namespace:"calico-apiserver", SelfLink:"", UID:"87091bc5-d911-4547-82d0-decf534f50dd", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f97f8f466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b", Pod:"calico-apiserver-5f97f8f466-r5z5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c3373c6db8", MAC:"b6:05:8a:ca:99:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 
13 00:03:53.105375 env[1583]: 2025-08-13 00:03:53.100 [INFO][4881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8f466-r5z5g" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:03:53.117959 env[1583]: time="2025-08-13T00:03:53.117920109Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:53.122071 env[1583]: time="2025-08-13T00:03:53.122021430Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:53.127400 env[1583]: time="2025-08-13T00:03:53.127365231Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:53.131000 audit[4961]: NETFILTER_CFG table=filter:125 family=2 entries=57 op=nft_register_chain pid=4961 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:03:53.131000 audit[4961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27812 a0=3 a1=fffffa66c600 a2=0 a3=ffffb7b3ffa8 items=0 ppid=3940 pid=4961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:53.131000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:03:53.133809 env[1583]: time="2025-08-13T00:03:53.133776032Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:53.134413 env[1583]: time="2025-08-13T00:03:53.134378952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:03:53.139616 env[1583]: time="2025-08-13T00:03:53.139337633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:03:53.140965 env[1583]: time="2025-08-13T00:03:53.140927113Z" level=info msg="CreateContainer within sandbox \"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:03:53.150464 env[1583]: time="2025-08-13T00:03:53.150363954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:53.150725 env[1583]: time="2025-08-13T00:03:53.150699514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:53.150839 env[1583]: time="2025-08-13T00:03:53.150816754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:53.151215 env[1583]: time="2025-08-13T00:03:53.151175274Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b pid=4977 runtime=io.containerd.runc.v2 Aug 13 00:03:53.230903 systemd-networkd[1759]: calif97246af908: Gained IPv6LL Aug 13 00:03:53.254815 env[1583]: time="2025-08-13T00:03:53.254772290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kcl6d,Uid:ec1ef1a0-b4ac-42a1-9532-047e283102fa,Namespace:calico-system,Attempt:1,} returns sandbox id \"8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd\"" Aug 13 00:03:53.269327 env[1583]: time="2025-08-13T00:03:53.269286493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8f466-r5z5g,Uid:87091bc5-d911-4547-82d0-decf534f50dd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b\"" Aug 13 00:03:53.272984 env[1583]: time="2025-08-13T00:03:53.272815293Z" level=info msg="CreateContainer within sandbox \"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:03:53.485849 systemd-networkd[1759]: calie8e42674a46: Gained IPv6LL Aug 13 00:03:53.493010 env[1583]: time="2025-08-13T00:03:53.492175167Z" level=info msg="StopPodSandbox for \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\"" Aug 13 00:03:53.550154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947450635.mount: Deactivated successfully. Aug 13 00:03:53.568011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3026795534.mount: Deactivated successfully. 
Aug 13 00:03:53.598599 env[1583]: time="2025-08-13T00:03:53.598548583Z" level=info msg="CreateContainer within sandbox \"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94\"" Aug 13 00:03:53.604106 env[1583]: time="2025-08-13T00:03:53.604057704Z" level=info msg="StartContainer for \"3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94\"" Aug 13 00:03:53.617243 env[1583]: time="2025-08-13T00:03:53.617197906Z" level=info msg="CreateContainer within sandbox \"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400\"" Aug 13 00:03:53.619906 env[1583]: time="2025-08-13T00:03:53.618652426Z" level=info msg="StartContainer for \"728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400\"" Aug 13 00:03:53.624678 env[1583]: 2025-08-13 00:03:53.569 [INFO][5029] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Aug 13 00:03:53.624678 env[1583]: 2025-08-13 00:03:53.569 [INFO][5029] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" iface="eth0" netns="/var/run/netns/cni-32aa6e94-ab66-a112-262d-3974dda5e666" Aug 13 00:03:53.624678 env[1583]: 2025-08-13 00:03:53.570 [INFO][5029] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" iface="eth0" netns="/var/run/netns/cni-32aa6e94-ab66-a112-262d-3974dda5e666" Aug 13 00:03:53.624678 env[1583]: 2025-08-13 00:03:53.572 [INFO][5029] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" iface="eth0" netns="/var/run/netns/cni-32aa6e94-ab66-a112-262d-3974dda5e666" Aug 13 00:03:53.624678 env[1583]: 2025-08-13 00:03:53.572 [INFO][5029] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Aug 13 00:03:53.624678 env[1583]: 2025-08-13 00:03:53.573 [INFO][5029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Aug 13 00:03:53.624678 env[1583]: 2025-08-13 00:03:53.596 [INFO][5038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" HandleID="k8s-pod-network.939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:03:53.624678 env[1583]: 2025-08-13 00:03:53.596 [INFO][5038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:53.624678 env[1583]: 2025-08-13 00:03:53.596 [INFO][5038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:03:53.624678 env[1583]: 2025-08-13 00:03:53.608 [WARNING][5038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" HandleID="k8s-pod-network.939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:03:53.624678 env[1583]: 2025-08-13 00:03:53.615 [INFO][5038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" HandleID="k8s-pod-network.939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:03:53.624678 env[1583]: 2025-08-13 00:03:53.620 [INFO][5038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:03:53.624678 env[1583]: 2025-08-13 00:03:53.622 [INFO][5029] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Aug 13 00:03:53.625393 env[1583]: time="2025-08-13T00:03:53.625334267Z" level=info msg="TearDown network for sandbox \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\" successfully" Aug 13 00:03:53.625835 env[1583]: time="2025-08-13T00:03:53.625810547Z" level=info msg="StopPodSandbox for \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\" returns successfully" Aug 13 00:03:53.626558 env[1583]: time="2025-08-13T00:03:53.626523907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576869b9dc-vzbtv,Uid:1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9,Namespace:calico-system,Attempt:1,}" Aug 13 00:03:53.631108 env[1583]: time="2025-08-13T00:03:53.631074948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:53.644390 env[1583]: time="2025-08-13T00:03:53.644353630Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:53.654293 env[1583]: time="2025-08-13T00:03:53.654247952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:53.658069 env[1583]: time="2025-08-13T00:03:53.657956912Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:53.658420 env[1583]: time="2025-08-13T00:03:53.658257672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:03:53.663277 env[1583]: time="2025-08-13T00:03:53.663238993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:03:53.663817 env[1583]: time="2025-08-13T00:03:53.663325393Z" level=info msg="CreateContainer within sandbox \"22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:03:53.714500 env[1583]: time="2025-08-13T00:03:53.714432121Z" level=info msg="StartContainer for \"3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94\" returns successfully" Aug 13 00:03:53.722296 env[1583]: time="2025-08-13T00:03:53.722241722Z" level=info msg="CreateContainer within sandbox \"22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"15db22607227db147f13391a6dd607e7b27baca6273adc78f26c64773a854f41\"" Aug 13 00:03:53.725026 env[1583]: time="2025-08-13T00:03:53.724989283Z" level=info 
msg="StartContainer for \"15db22607227db147f13391a6dd607e7b27baca6273adc78f26c64773a854f41\"" Aug 13 00:03:53.753721 env[1583]: time="2025-08-13T00:03:53.747857086Z" level=info msg="StartContainer for \"728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400\" returns successfully" Aug 13 00:03:53.910546 kubelet[2655]: I0813 00:03:53.910260 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f97f8f466-r5z5g" podStartSLOduration=33.910240351 podStartE2EDuration="33.910240351s" podCreationTimestamp="2025-08-13 00:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:53.878616666 +0000 UTC m=+51.557515412" watchObservedRunningTime="2025-08-13 00:03:53.910240351 +0000 UTC m=+51.589139057" Aug 13 00:03:53.911408 kubelet[2655]: I0813 00:03:53.911335 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f97f8f466-rqg7k" podStartSLOduration=30.812718901 podStartE2EDuration="33.911309791s" podCreationTimestamp="2025-08-13 00:03:20 +0000 UTC" firstStartedPulling="2025-08-13 00:03:50.037078542 +0000 UTC m=+47.715977288" lastFinishedPulling="2025-08-13 00:03:53.135669432 +0000 UTC m=+50.814568178" observedRunningTime="2025-08-13 00:03:53.910459311 +0000 UTC m=+51.589358057" watchObservedRunningTime="2025-08-13 00:03:53.911309791 +0000 UTC m=+51.590208497" Aug 13 00:03:53.949844 env[1583]: time="2025-08-13T00:03:53.949799157Z" level=info msg="StartContainer for \"15db22607227db147f13391a6dd607e7b27baca6273adc78f26c64773a854f41\" returns successfully" Aug 13 00:03:53.955000 audit[5172]: NETFILTER_CFG table=filter:126 family=2 entries=12 op=nft_register_rule pid=5172 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:53.955000 audit[5172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffcd2e7610 a2=0 a3=1 
items=0 ppid=2755 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:53.955000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:53.979000 audit[5172]: NETFILTER_CFG table=nat:127 family=2 entries=22 op=nft_register_rule pid=5172 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:53.979000 audit[5172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffcd2e7610 a2=0 a3=1 items=0 ppid=2755 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:53.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:54.035913 systemd-networkd[1759]: cali837eeda51f6: Link UP Aug 13 00:03:54.050368 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:03:54.050490 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali837eeda51f6: link becomes ready Aug 13 00:03:54.056683 systemd-networkd[1759]: cali837eeda51f6: Gained carrier Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:53.831 [INFO][5107] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0 calico-kube-controllers-576869b9dc- calico-system 1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9 1043 0 2025-08-13 00:03:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:576869b9dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.8-a-dd293077f6 calico-kube-controllers-576869b9dc-vzbtv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali837eeda51f6 [] [] }} ContainerID="5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" Namespace="calico-system" Pod="calico-kube-controllers-576869b9dc-vzbtv" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-" Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:53.831 [INFO][5107] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" Namespace="calico-system" Pod="calico-kube-controllers-576869b9dc-vzbtv" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:53.945 [INFO][5157] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" HandleID="k8s-pod-network.5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:53.946 [INFO][5157] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" HandleID="k8s-pod-network.5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-a-dd293077f6", "pod":"calico-kube-controllers-576869b9dc-vzbtv", "timestamp":"2025-08-13 00:03:53.944704556 +0000 UTC"}, Hostname:"ci-3510.3.8-a-dd293077f6", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:53.946 [INFO][5157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:53.946 [INFO][5157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:53.946 [INFO][5157] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-dd293077f6' Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:53.962 [INFO][5157] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:53.986 [INFO][5157] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:53.993 [INFO][5157] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:53.996 [INFO][5157] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:54.000 [INFO][5157] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:54.000 [INFO][5157] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:54.003 [INFO][5157] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5 Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:54.009 [INFO][5157] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:54.018 [INFO][5157] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.73/26] block=192.168.55.64/26 handle="k8s-pod-network.5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:54.019 [INFO][5157] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.73/26] handle="k8s-pod-network.5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:54.019 [INFO][5157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:03:54.070841 env[1583]: 2025-08-13 00:03:54.019 [INFO][5157] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.73/26] IPv6=[] ContainerID="5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" HandleID="k8s-pod-network.5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:03:54.071396 env[1583]: 2025-08-13 00:03:54.021 [INFO][5107] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" Namespace="calico-system" Pod="calico-kube-controllers-576869b9dc-vzbtv" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0", GenerateName:"calico-kube-controllers-576869b9dc-", Namespace:"calico-system", SelfLink:"", UID:"1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"576869b9dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"", Pod:"calico-kube-controllers-576869b9dc-vzbtv", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali837eeda51f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:54.071396 env[1583]: 2025-08-13 00:03:54.021 [INFO][5107] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.73/32] ContainerID="5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" Namespace="calico-system" Pod="calico-kube-controllers-576869b9dc-vzbtv" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:03:54.071396 env[1583]: 2025-08-13 00:03:54.021 [INFO][5107] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali837eeda51f6 ContainerID="5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" Namespace="calico-system" Pod="calico-kube-controllers-576869b9dc-vzbtv" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:03:54.071396 env[1583]: 2025-08-13 00:03:54.057 [INFO][5107] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" Namespace="calico-system" Pod="calico-kube-controllers-576869b9dc-vzbtv" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:03:54.071396 env[1583]: 2025-08-13 00:03:54.057 [INFO][5107] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" Namespace="calico-system" Pod="calico-kube-controllers-576869b9dc-vzbtv" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0", GenerateName:"calico-kube-controllers-576869b9dc-", Namespace:"calico-system", SelfLink:"", UID:"1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"576869b9dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5", Pod:"calico-kube-controllers-576869b9dc-vzbtv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali837eeda51f6", MAC:"2e:e1:0e:72:8e:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:03:54.071396 env[1583]: 2025-08-13 00:03:54.069 [INFO][5107] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5" Namespace="calico-system" Pod="calico-kube-controllers-576869b9dc-vzbtv" 
WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:03:54.100906 env[1583]: time="2025-08-13T00:03:54.099443660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:54.100906 env[1583]: time="2025-08-13T00:03:54.099484700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:54.100906 env[1583]: time="2025-08-13T00:03:54.099494980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:54.100906 env[1583]: time="2025-08-13T00:03:54.099610500Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5 pid=5190 runtime=io.containerd.runc.v2 Aug 13 00:03:54.132000 audit[5208]: NETFILTER_CFG table=filter:128 family=2 entries=60 op=nft_register_chain pid=5208 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:03:54.132000 audit[5208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26688 a0=3 a1=fffff5705920 a2=0 a3=ffffa28a4fa8 items=0 ppid=3940 pid=5208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:54.132000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:03:54.238475 env[1583]: time="2025-08-13T00:03:54.238416721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576869b9dc-vzbtv,Uid:1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5\"" Aug 13 00:03:54.254785 systemd-networkd[1759]: cali2c3373c6db8: Gained IPv6LL Aug 13 00:03:54.317840 systemd-networkd[1759]: calib0a1b86e791: Gained IPv6LL Aug 13 00:03:54.487458 systemd[1]: run-netns-cni\x2d32aa6e94\x2dab66\x2da112\x2d262d\x2d3974dda5e666.mount: Deactivated successfully. Aug 13 00:03:54.923948 kubelet[2655]: I0813 00:03:54.923871 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:03:54.924317 kubelet[2655]: I0813 00:03:54.924248 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:03:54.995724 kernel: kauditd_printk_skb: 35 callbacks suppressed Aug 13 00:03:54.995868 kernel: audit: type=1325 audit(1755043434.977:439): table=filter:129 family=2 entries=12 op=nft_register_rule pid=5230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:54.977000 audit[5230]: NETFILTER_CFG table=filter:129 family=2 entries=12 op=nft_register_rule pid=5230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:54.977000 audit[5230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffeedc73e0 a2=0 a3=1 items=0 ppid=2755 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:55.024602 kernel: audit: type=1300 audit(1755043434.977:439): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffeedc73e0 a2=0 a3=1 items=0 ppid=2755 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:54.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:55.038409 
kernel: audit: type=1327 audit(1755043434.977:439): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:55.028000 audit[5230]: NETFILTER_CFG table=nat:130 family=2 entries=22 op=nft_register_rule pid=5230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:55.053158 kernel: audit: type=1325 audit(1755043435.028:440): table=nat:130 family=2 entries=22 op=nft_register_rule pid=5230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:55.053306 kernel: audit: type=1300 audit(1755043435.028:440): arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffeedc73e0 a2=0 a3=1 items=0 ppid=2755 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:55.028000 audit[5230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffeedc73e0 a2=0 a3=1 items=0 ppid=2755 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:55.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:55.095320 kernel: audit: type=1327 audit(1755043435.028:440): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:55.106671 env[1583]: time="2025-08-13T00:03:55.106620012Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:55.112207 env[1583]: time="2025-08-13T00:03:55.112164213Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:55.116445 env[1583]: time="2025-08-13T00:03:55.116400014Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:55.120550 env[1583]: time="2025-08-13T00:03:55.120512454Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:55.121091 env[1583]: time="2025-08-13T00:03:55.121060254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 13 00:03:55.123280 env[1583]: time="2025-08-13T00:03:55.123238815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:03:55.124244 env[1583]: time="2025-08-13T00:03:55.124205935Z" level=info msg="CreateContainer within sandbox \"f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:03:55.163069 env[1583]: time="2025-08-13T00:03:55.163007741Z" level=info msg="CreateContainer within sandbox \"f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"976738c4afa4ab89e079fcd75978fdea32f961468c4a5e0c7665795d84389187\"" Aug 13 00:03:55.164052 env[1583]: time="2025-08-13T00:03:55.164008941Z" level=info msg="StartContainer for \"976738c4afa4ab89e079fcd75978fdea32f961468c4a5e0c7665795d84389187\"" Aug 13 00:03:55.292969 env[1583]: time="2025-08-13T00:03:55.292844840Z" level=info msg="StartContainer for 
\"976738c4afa4ab89e079fcd75978fdea32f961468c4a5e0c7665795d84389187\" returns successfully" Aug 13 00:03:55.726817 systemd-networkd[1759]: cali837eeda51f6: Gained IPv6LL Aug 13 00:03:55.929068 kubelet[2655]: I0813 00:03:55.929036 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:03:55.929527 kubelet[2655]: I0813 00:03:55.929050 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:03:56.214000 audit[5267]: NETFILTER_CFG table=filter:131 family=2 entries=12 op=nft_register_rule pid=5267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:56.214000 audit[5267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe08502c0 a2=0 a3=1 items=0 ppid=2755 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:56.276922 kernel: audit: type=1325 audit(1755043436.214:441): table=filter:131 family=2 entries=12 op=nft_register_rule pid=5267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:56.277051 kernel: audit: type=1300 audit(1755043436.214:441): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe08502c0 a2=0 a3=1 items=0 ppid=2755 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:56.214000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:56.306717 kernel: audit: type=1327 audit(1755043436.214:441): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:56.306895 kernel: audit: type=1325 audit(1755043436.238:442): table=nat:132 family=2 
entries=22 op=nft_register_rule pid=5267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:56.238000 audit[5267]: NETFILTER_CFG table=nat:132 family=2 entries=22 op=nft_register_rule pid=5267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:56.238000 audit[5267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffe08502c0 a2=0 a3=1 items=0 ppid=2755 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:56.238000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:57.348057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3826714361.mount: Deactivated successfully. Aug 13 00:03:57.475686 kubelet[2655]: I0813 00:03:57.471375 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5664c8f75b-pc5h2" podStartSLOduration=31.677740845 podStartE2EDuration="34.471357122s" podCreationTimestamp="2025-08-13 00:03:23 +0000 UTC" firstStartedPulling="2025-08-13 00:03:50.866508516 +0000 UTC m=+48.545407262" lastFinishedPulling="2025-08-13 00:03:53.660124793 +0000 UTC m=+51.339023539" observedRunningTime="2025-08-13 00:03:54.931382106 +0000 UTC m=+52.610280852" watchObservedRunningTime="2025-08-13 00:03:57.471357122 +0000 UTC m=+55.150255868" Aug 13 00:03:57.524000 audit[5272]: NETFILTER_CFG table=filter:133 family=2 entries=11 op=nft_register_rule pid=5272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:57.524000 audit[5272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffeb13db90 a2=0 a3=1 items=0 ppid=2755 pid=5272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:57.524000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:57.529000 audit[5272]: NETFILTER_CFG table=nat:134 family=2 entries=29 op=nft_register_chain pid=5272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:57.529000 audit[5272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=ffffeb13db90 a2=0 a3=1 items=0 ppid=2755 pid=5272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:57.529000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:58.075099 env[1583]: time="2025-08-13T00:03:58.075056849Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:58.080529 env[1583]: time="2025-08-13T00:03:58.080488130Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:58.084507 env[1583]: time="2025-08-13T00:03:58.084470891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:58.088196 env[1583]: time="2025-08-13T00:03:58.088162611Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Aug 13 00:03:58.089578 env[1583]: time="2025-08-13T00:03:58.088984371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 13 00:03:58.092676 env[1583]: time="2025-08-13T00:03:58.092623532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:03:58.093972 env[1583]: time="2025-08-13T00:03:58.093930652Z" level=info msg="CreateContainer within sandbox \"8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:03:58.117824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount434001292.mount: Deactivated successfully. Aug 13 00:03:58.141080 env[1583]: time="2025-08-13T00:03:58.141030699Z" level=info msg="CreateContainer within sandbox \"8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f8e39e7b31e6506e0cdfb6ac5a659be058ceeb9f650275f824d88a249493d425\"" Aug 13 00:03:58.142942 env[1583]: time="2025-08-13T00:03:58.142906059Z" level=info msg="StartContainer for \"f8e39e7b31e6506e0cdfb6ac5a659be058ceeb9f650275f824d88a249493d425\"" Aug 13 00:03:58.184116 systemd[1]: run-containerd-runc-k8s.io-f8e39e7b31e6506e0cdfb6ac5a659be058ceeb9f650275f824d88a249493d425-runc.Lv899p.mount: Deactivated successfully. 
Aug 13 00:03:58.231989 env[1583]: time="2025-08-13T00:03:58.231936592Z" level=info msg="StartContainer for \"f8e39e7b31e6506e0cdfb6ac5a659be058ceeb9f650275f824d88a249493d425\" returns successfully" Aug 13 00:03:58.958701 kubelet[2655]: I0813 00:03:58.958620 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-kcl6d" podStartSLOduration=28.125222055 podStartE2EDuration="32.958601696s" podCreationTimestamp="2025-08-13 00:03:26 +0000 UTC" firstStartedPulling="2025-08-13 00:03:53.257033491 +0000 UTC m=+50.935932237" lastFinishedPulling="2025-08-13 00:03:58.090413132 +0000 UTC m=+55.769311878" observedRunningTime="2025-08-13 00:03:58.958053536 +0000 UTC m=+56.636952282" watchObservedRunningTime="2025-08-13 00:03:58.958601696 +0000 UTC m=+56.637500442" Aug 13 00:03:58.973000 audit[5324]: NETFILTER_CFG table=filter:135 family=2 entries=10 op=nft_register_rule pid=5324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:58.973000 audit[5324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffeffe5a90 a2=0 a3=1 items=0 ppid=2755 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:58.973000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:58.975000 audit[5324]: NETFILTER_CFG table=nat:136 family=2 entries=24 op=nft_register_rule pid=5324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:03:58.975000 audit[5324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7308 a0=3 a1=ffffeffe5a90 a2=0 a3=1 items=0 ppid=2755 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:58.975000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:03:59.990432 systemd[1]: run-containerd-runc-k8s.io-f8e39e7b31e6506e0cdfb6ac5a659be058ceeb9f650275f824d88a249493d425-runc.ae9nZd.mount: Deactivated successfully. Aug 13 00:04:00.399139 env[1583]: time="2025-08-13T00:04:00.399024820Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:04:00.404092 env[1583]: time="2025-08-13T00:04:00.404044981Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:04:00.407468 env[1583]: time="2025-08-13T00:04:00.407432021Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:04:00.410736 env[1583]: time="2025-08-13T00:04:00.410703821Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:04:00.411319 env[1583]: time="2025-08-13T00:04:00.411281862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 13 00:04:00.413500 env[1583]: time="2025-08-13T00:04:00.412718422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:04:00.431826 env[1583]: time="2025-08-13T00:04:00.431783224Z" level=info msg="CreateContainer within 
sandbox \"5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:04:00.437400 kubelet[2655]: I0813 00:04:00.437298 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:04:00.476327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043975901.mount: Deactivated successfully. Aug 13 00:04:00.489168 env[1583]: time="2025-08-13T00:04:00.489119432Z" level=info msg="CreateContainer within sandbox \"5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"154f461bd420d918710d70623b8ac785ac07b094ea66e514e860bd8144da402e\"" Aug 13 00:04:00.490172 env[1583]: time="2025-08-13T00:04:00.490138593Z" level=info msg="StartContainer for \"154f461bd420d918710d70623b8ac785ac07b094ea66e514e860bd8144da402e\"" Aug 13 00:04:00.499000 audit[5359]: NETFILTER_CFG table=filter:137 family=2 entries=10 op=nft_register_rule pid=5359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:00.505250 kernel: kauditd_printk_skb: 14 callbacks suppressed Aug 13 00:04:00.505409 kernel: audit: type=1325 audit(1755043440.499:447): table=filter:137 family=2 entries=10 op=nft_register_rule pid=5359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:00.499000 audit[5359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=fffffd8b1680 a2=0 a3=1 items=0 ppid=2755 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:00.499000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:00.579297 kernel: audit: type=1300 audit(1755043440.499:447): arch=c00000b7 syscall=211 
success=yes exit=3760 a0=3 a1=fffffd8b1680 a2=0 a3=1 items=0 ppid=2755 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:00.579496 kernel: audit: type=1327 audit(1755043440.499:447): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:00.536000 audit[5359]: NETFILTER_CFG table=nat:138 family=2 entries=36 op=nft_register_chain pid=5359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:00.594576 kernel: audit: type=1325 audit(1755043440.536:448): table=nat:138 family=2 entries=36 op=nft_register_chain pid=5359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:00.536000 audit[5359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12004 a0=3 a1=fffffd8b1680 a2=0 a3=1 items=0 ppid=2755 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:00.622144 kernel: audit: type=1300 audit(1755043440.536:448): arch=c00000b7 syscall=211 success=yes exit=12004 a0=3 a1=fffffd8b1680 a2=0 a3=1 items=0 ppid=2755 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:00.536000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:00.636828 kernel: audit: type=1327 audit(1755043440.536:448): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:00.656482 env[1583]: time="2025-08-13T00:04:00.656258216Z" 
level=info msg="StartContainer for \"154f461bd420d918710d70623b8ac785ac07b094ea66e514e860bd8144da402e\" returns successfully" Aug 13 00:04:01.041466 kubelet[2655]: I0813 00:04:01.040965 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-576869b9dc-vzbtv" podStartSLOduration=28.868309089 podStartE2EDuration="35.04094631s" podCreationTimestamp="2025-08-13 00:03:26 +0000 UTC" firstStartedPulling="2025-08-13 00:03:54.239920881 +0000 UTC m=+51.918819627" lastFinishedPulling="2025-08-13 00:04:00.412558102 +0000 UTC m=+58.091456848" observedRunningTime="2025-08-13 00:04:00.965692059 +0000 UTC m=+58.644590805" watchObservedRunningTime="2025-08-13 00:04:01.04094631 +0000 UTC m=+58.719845056" Aug 13 00:04:01.107000 audit[5438]: NETFILTER_CFG table=filter:139 family=2 entries=9 op=nft_register_rule pid=5438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:01.107000 audit[5438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffe1cb3d70 a2=0 a3=1 items=0 ppid=2755 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:01.149665 kernel: audit: type=1325 audit(1755043441.107:449): table=filter:139 family=2 entries=9 op=nft_register_rule pid=5438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:01.149804 kernel: audit: type=1300 audit(1755043441.107:449): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffe1cb3d70 a2=0 a3=1 items=0 ppid=2755 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:01.107000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:01.163753 kernel: audit: type=1327 audit(1755043441.107:449): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:01.149000 audit[5438]: NETFILTER_CFG table=nat:140 family=2 entries=31 op=nft_register_chain pid=5438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:01.180405 kernel: audit: type=1325 audit(1755043441.149:450): table=nat:140 family=2 entries=31 op=nft_register_chain pid=5438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:01.149000 audit[5438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=ffffe1cb3d70 a2=0 a3=1 items=0 ppid=2755 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:01.149000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:02.200854 env[1583]: time="2025-08-13T00:04:02.200811310Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:04:02.205937 env[1583]: time="2025-08-13T00:04:02.205896950Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:04:02.208993 env[1583]: time="2025-08-13T00:04:02.208952071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 
00:04:02.212918 env[1583]: time="2025-08-13T00:04:02.212851831Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:04:02.213167 env[1583]: time="2025-08-13T00:04:02.213136351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 13 00:04:02.217860 env[1583]: time="2025-08-13T00:04:02.216337392Z" level=info msg="CreateContainer within sandbox \"f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 00:04:02.249126 env[1583]: time="2025-08-13T00:04:02.249071636Z" level=info msg="CreateContainer within sandbox \"f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"deab88ae7802be203aa47b68ed7af58d61738e94c1c43eb9b6b1ef2f702370fa\"" Aug 13 00:04:02.249963 env[1583]: time="2025-08-13T00:04:02.249933676Z" level=info msg="StartContainer for \"deab88ae7802be203aa47b68ed7af58d61738e94c1c43eb9b6b1ef2f702370fa\"" Aug 13 00:04:02.419538 systemd[1]: run-containerd-runc-k8s.io-deab88ae7802be203aa47b68ed7af58d61738e94c1c43eb9b6b1ef2f702370fa-runc.3EQIMV.mount: Deactivated successfully. Aug 13 00:04:02.526185 env[1583]: time="2025-08-13T00:04:02.525694674Z" level=info msg="StopPodSandbox for \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\"" Aug 13 00:04:02.618755 env[1583]: 2025-08-13 00:04:02.565 [WARNING][5472] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0", GenerateName:"calico-apiserver-5f97f8f466-", Namespace:"calico-apiserver", SelfLink:"", UID:"87091bc5-d911-4547-82d0-decf534f50dd", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f97f8f466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b", Pod:"calico-apiserver-5f97f8f466-r5z5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c3373c6db8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:02.618755 env[1583]: 2025-08-13 00:04:02.565 [INFO][5472] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Aug 13 00:04:02.618755 env[1583]: 2025-08-13 00:04:02.565 [INFO][5472] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" iface="eth0" netns="" Aug 13 00:04:02.618755 env[1583]: 2025-08-13 00:04:02.565 [INFO][5472] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Aug 13 00:04:02.618755 env[1583]: 2025-08-13 00:04:02.565 [INFO][5472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Aug 13 00:04:02.618755 env[1583]: 2025-08-13 00:04:02.603 [INFO][5480] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" HandleID="k8s-pod-network.290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:04:02.618755 env[1583]: 2025-08-13 00:04:02.603 [INFO][5480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:02.618755 env[1583]: 2025-08-13 00:04:02.603 [INFO][5480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:02.618755 env[1583]: 2025-08-13 00:04:02.613 [WARNING][5480] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" HandleID="k8s-pod-network.290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:04:02.618755 env[1583]: 2025-08-13 00:04:02.613 [INFO][5480] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" HandleID="k8s-pod-network.290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:04:02.618755 env[1583]: 2025-08-13 00:04:02.616 [INFO][5480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:02.618755 env[1583]: 2025-08-13 00:04:02.617 [INFO][5472] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Aug 13 00:04:02.619287 env[1583]: time="2025-08-13T00:04:02.619252207Z" level=info msg="TearDown network for sandbox \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\" successfully" Aug 13 00:04:02.619363 env[1583]: time="2025-08-13T00:04:02.619346727Z" level=info msg="StopPodSandbox for \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\" returns successfully" Aug 13 00:04:02.620111 env[1583]: time="2025-08-13T00:04:02.620083647Z" level=info msg="RemovePodSandbox for \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\"" Aug 13 00:04:02.620369 env[1583]: time="2025-08-13T00:04:02.620311687Z" level=info msg="Forcibly stopping sandbox \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\"" Aug 13 00:04:02.962814 kubelet[2655]: I0813 00:04:02.962780 2655 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 
00:04:02.968376 kubelet[2655]: I0813 00:04:02.967935 2655 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 00:04:03.023187 env[1583]: 2025-08-13 00:04:02.712 [WARNING][5505] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0", GenerateName:"calico-apiserver-5f97f8f466-", Namespace:"calico-apiserver", SelfLink:"", UID:"87091bc5-d911-4547-82d0-decf534f50dd", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f97f8f466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b", Pod:"calico-apiserver-5f97f8f466-r5z5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c3373c6db8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:03.023187 env[1583]: 2025-08-13 00:04:02.966 [INFO][5505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Aug 13 00:04:03.023187 env[1583]: 2025-08-13 00:04:02.966 [INFO][5505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" iface="eth0" netns="" Aug 13 00:04:03.023187 env[1583]: 2025-08-13 00:04:02.966 [INFO][5505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Aug 13 00:04:03.023187 env[1583]: 2025-08-13 00:04:02.966 [INFO][5505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Aug 13 00:04:03.023187 env[1583]: 2025-08-13 00:04:03.007 [INFO][5512] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" HandleID="k8s-pod-network.290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:04:03.023187 env[1583]: 2025-08-13 00:04:03.007 [INFO][5512] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:03.023187 env[1583]: 2025-08-13 00:04:03.008 [INFO][5512] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:03.023187 env[1583]: 2025-08-13 00:04:03.017 [WARNING][5512] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" HandleID="k8s-pod-network.290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:04:03.023187 env[1583]: 2025-08-13 00:04:03.017 [INFO][5512] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" HandleID="k8s-pod-network.290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:04:03.023187 env[1583]: 2025-08-13 00:04:03.020 [INFO][5512] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:03.023187 env[1583]: 2025-08-13 00:04:03.021 [INFO][5505] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70" Aug 13 00:04:03.023643 env[1583]: time="2025-08-13T00:04:03.023234302Z" level=info msg="TearDown network for sandbox \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\" successfully" Aug 13 00:04:03.209371 env[1583]: time="2025-08-13T00:04:03.209326127Z" level=info msg="StartContainer for \"deab88ae7802be203aa47b68ed7af58d61738e94c1c43eb9b6b1ef2f702370fa\" returns successfully" Aug 13 00:04:03.221476 env[1583]: time="2025-08-13T00:04:03.220173168Z" level=info msg="RemovePodSandbox \"290797fbee0b26f2bc94d36961517bd53f4c376b37486958785c84c5762ffa70\" returns successfully" Aug 13 00:04:03.225046 env[1583]: time="2025-08-13T00:04:03.225010729Z" level=info msg="StopPodSandbox for \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\"" Aug 13 00:04:03.240731 kubelet[2655]: I0813 00:04:03.240224 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dc7fc" podStartSLOduration=27.223257626 podStartE2EDuration="37.240150691s" 
podCreationTimestamp="2025-08-13 00:03:26 +0000 UTC" firstStartedPulling="2025-08-13 00:03:52.197634926 +0000 UTC m=+49.876533632" lastFinishedPulling="2025-08-13 00:04:02.214527951 +0000 UTC m=+59.893426697" observedRunningTime="2025-08-13 00:04:03.239626331 +0000 UTC m=+60.918525077" watchObservedRunningTime="2025-08-13 00:04:03.240150691 +0000 UTC m=+60.919049477" Aug 13 00:04:03.346027 env[1583]: 2025-08-13 00:04:03.310 [WARNING][5527] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0", GenerateName:"calico-apiserver-5f97f8f466-", Namespace:"calico-apiserver", SelfLink:"", UID:"77ef7b45-3d42-4c7d-b3d2-5b91108fefb3", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f97f8f466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a", Pod:"calico-apiserver-5f97f8f466-rqg7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48a074b18cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:03.346027 env[1583]: 2025-08-13 00:04:03.311 [INFO][5527] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Aug 13 00:04:03.346027 env[1583]: 2025-08-13 00:04:03.311 [INFO][5527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" iface="eth0" netns="" Aug 13 00:04:03.346027 env[1583]: 2025-08-13 00:04:03.311 [INFO][5527] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Aug 13 00:04:03.346027 env[1583]: 2025-08-13 00:04:03.311 [INFO][5527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Aug 13 00:04:03.346027 env[1583]: 2025-08-13 00:04:03.331 [INFO][5535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" HandleID="k8s-pod-network.4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:04:03.346027 env[1583]: 2025-08-13 00:04:03.331 [INFO][5535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:03.346027 env[1583]: 2025-08-13 00:04:03.331 [INFO][5535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:03.346027 env[1583]: 2025-08-13 00:04:03.341 [WARNING][5535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" HandleID="k8s-pod-network.4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:04:03.346027 env[1583]: 2025-08-13 00:04:03.341 [INFO][5535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" HandleID="k8s-pod-network.4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:04:03.346027 env[1583]: 2025-08-13 00:04:03.343 [INFO][5535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:03.346027 env[1583]: 2025-08-13 00:04:03.344 [INFO][5527] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Aug 13 00:04:03.346602 env[1583]: time="2025-08-13T00:04:03.346561946Z" level=info msg="TearDown network for sandbox \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\" successfully" Aug 13 00:04:03.346769 env[1583]: time="2025-08-13T00:04:03.346747706Z" level=info msg="StopPodSandbox for \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\" returns successfully" Aug 13 00:04:03.347346 env[1583]: time="2025-08-13T00:04:03.347315266Z" level=info msg="RemovePodSandbox for \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\"" Aug 13 00:04:03.347485 env[1583]: time="2025-08-13T00:04:03.347445626Z" level=info msg="Forcibly stopping sandbox \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\"" Aug 13 00:04:03.422900 env[1583]: 2025-08-13 00:04:03.387 [WARNING][5550] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0", GenerateName:"calico-apiserver-5f97f8f466-", Namespace:"calico-apiserver", SelfLink:"", UID:"77ef7b45-3d42-4c7d-b3d2-5b91108fefb3", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f97f8f466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a", Pod:"calico-apiserver-5f97f8f466-rqg7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48a074b18cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:03.422900 env[1583]: 2025-08-13 00:04:03.387 [INFO][5550] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Aug 13 00:04:03.422900 env[1583]: 2025-08-13 00:04:03.387 [INFO][5550] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" iface="eth0" netns="" Aug 13 00:04:03.422900 env[1583]: 2025-08-13 00:04:03.387 [INFO][5550] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Aug 13 00:04:03.422900 env[1583]: 2025-08-13 00:04:03.387 [INFO][5550] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Aug 13 00:04:03.422900 env[1583]: 2025-08-13 00:04:03.407 [INFO][5558] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" HandleID="k8s-pod-network.4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:04:03.422900 env[1583]: 2025-08-13 00:04:03.407 [INFO][5558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:03.422900 env[1583]: 2025-08-13 00:04:03.407 [INFO][5558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:03.422900 env[1583]: 2025-08-13 00:04:03.417 [WARNING][5558] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" HandleID="k8s-pod-network.4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:04:03.422900 env[1583]: 2025-08-13 00:04:03.417 [INFO][5558] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" HandleID="k8s-pod-network.4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:04:03.422900 env[1583]: 2025-08-13 00:04:03.420 [INFO][5558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:03.422900 env[1583]: 2025-08-13 00:04:03.421 [INFO][5550] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797" Aug 13 00:04:03.423373 env[1583]: time="2025-08-13T00:04:03.422931676Z" level=info msg="TearDown network for sandbox \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\" successfully" Aug 13 00:04:03.429553 env[1583]: time="2025-08-13T00:04:03.429504597Z" level=info msg="RemovePodSandbox \"4e6c7c3adfa6c38531ec378d3b31a0f6a045dbba261f9b907a45ea913661b797\" returns successfully" Aug 13 00:04:03.430230 env[1583]: time="2025-08-13T00:04:03.430176717Z" level=info msg="StopPodSandbox for \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\"" Aug 13 00:04:03.504111 env[1583]: 2025-08-13 00:04:03.467 [WARNING][5572] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-whisker--88f556f98--v2t2q-eth0" Aug 13 00:04:03.504111 env[1583]: 2025-08-13 00:04:03.468 [INFO][5572] cni-plugin/k8s.go 640: Cleaning 
up netns ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Aug 13 00:04:03.504111 env[1583]: 2025-08-13 00:04:03.468 [INFO][5572] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" iface="eth0" netns="" Aug 13 00:04:03.504111 env[1583]: 2025-08-13 00:04:03.468 [INFO][5572] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Aug 13 00:04:03.504111 env[1583]: 2025-08-13 00:04:03.468 [INFO][5572] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Aug 13 00:04:03.504111 env[1583]: 2025-08-13 00:04:03.488 [INFO][5579] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" HandleID="k8s-pod-network.fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Workload="ci--3510.3.8--a--dd293077f6-k8s-whisker--88f556f98--v2t2q-eth0" Aug 13 00:04:03.504111 env[1583]: 2025-08-13 00:04:03.488 [INFO][5579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:03.504111 env[1583]: 2025-08-13 00:04:03.488 [INFO][5579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:03.504111 env[1583]: 2025-08-13 00:04:03.498 [WARNING][5579] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" HandleID="k8s-pod-network.fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Workload="ci--3510.3.8--a--dd293077f6-k8s-whisker--88f556f98--v2t2q-eth0" Aug 13 00:04:03.504111 env[1583]: 2025-08-13 00:04:03.498 [INFO][5579] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" HandleID="k8s-pod-network.fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Workload="ci--3510.3.8--a--dd293077f6-k8s-whisker--88f556f98--v2t2q-eth0" Aug 13 00:04:03.504111 env[1583]: 2025-08-13 00:04:03.500 [INFO][5579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:03.504111 env[1583]: 2025-08-13 00:04:03.502 [INFO][5572] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Aug 13 00:04:03.505252 env[1583]: time="2025-08-13T00:04:03.505196687Z" level=info msg="TearDown network for sandbox \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\" successfully" Aug 13 00:04:03.505337 env[1583]: time="2025-08-13T00:04:03.505318727Z" level=info msg="StopPodSandbox for \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\" returns successfully" Aug 13 00:04:03.505920 env[1583]: time="2025-08-13T00:04:03.505895527Z" level=info msg="RemovePodSandbox for \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\"" Aug 13 00:04:03.506140 env[1583]: time="2025-08-13T00:04:03.506097767Z" level=info msg="Forcibly stopping sandbox \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\"" Aug 13 00:04:03.584533 env[1583]: 2025-08-13 00:04:03.548 [WARNING][5595] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" 
WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-whisker--88f556f98--v2t2q-eth0" Aug 13 00:04:03.584533 env[1583]: 2025-08-13 00:04:03.548 [INFO][5595] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Aug 13 00:04:03.584533 env[1583]: 2025-08-13 00:04:03.548 [INFO][5595] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" iface="eth0" netns="" Aug 13 00:04:03.584533 env[1583]: 2025-08-13 00:04:03.548 [INFO][5595] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Aug 13 00:04:03.584533 env[1583]: 2025-08-13 00:04:03.548 [INFO][5595] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Aug 13 00:04:03.584533 env[1583]: 2025-08-13 00:04:03.567 [INFO][5602] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" HandleID="k8s-pod-network.fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Workload="ci--3510.3.8--a--dd293077f6-k8s-whisker--88f556f98--v2t2q-eth0" Aug 13 00:04:03.584533 env[1583]: 2025-08-13 00:04:03.568 [INFO][5602] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:03.584533 env[1583]: 2025-08-13 00:04:03.568 [INFO][5602] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:03.584533 env[1583]: 2025-08-13 00:04:03.578 [WARNING][5602] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" HandleID="k8s-pod-network.fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Workload="ci--3510.3.8--a--dd293077f6-k8s-whisker--88f556f98--v2t2q-eth0" Aug 13 00:04:03.584533 env[1583]: 2025-08-13 00:04:03.578 [INFO][5602] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" HandleID="k8s-pod-network.fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Workload="ci--3510.3.8--a--dd293077f6-k8s-whisker--88f556f98--v2t2q-eth0" Aug 13 00:04:03.584533 env[1583]: 2025-08-13 00:04:03.580 [INFO][5602] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:03.584533 env[1583]: 2025-08-13 00:04:03.582 [INFO][5595] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2" Aug 13 00:04:03.584974 env[1583]: time="2025-08-13T00:04:03.584566018Z" level=info msg="TearDown network for sandbox \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\" successfully" Aug 13 00:04:03.592501 env[1583]: time="2025-08-13T00:04:03.592437219Z" level=info msg="RemovePodSandbox \"fcbc5b445f8a3fc2a088e7b5456f1dd8ee26a08e7c21eb05f71e2143986882c2\" returns successfully" Aug 13 00:04:03.593005 env[1583]: time="2025-08-13T00:04:03.592971579Z" level=info msg="StopPodSandbox for \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\"" Aug 13 00:04:03.665277 env[1583]: 2025-08-13 00:04:03.630 [WARNING][5616] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e788131-5ffd-4005-9137-e23c17af1da5", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0", Pod:"csi-node-driver-dc7fc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8e42674a46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:03.665277 env[1583]: 2025-08-13 00:04:03.630 [INFO][5616] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Aug 13 00:04:03.665277 env[1583]: 2025-08-13 00:04:03.630 [INFO][5616] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" iface="eth0" netns="" Aug 13 00:04:03.665277 env[1583]: 2025-08-13 00:04:03.630 [INFO][5616] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Aug 13 00:04:03.665277 env[1583]: 2025-08-13 00:04:03.630 [INFO][5616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Aug 13 00:04:03.665277 env[1583]: 2025-08-13 00:04:03.650 [INFO][5624] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" HandleID="k8s-pod-network.838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Workload="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:04:03.665277 env[1583]: 2025-08-13 00:04:03.650 [INFO][5624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:03.665277 env[1583]: 2025-08-13 00:04:03.650 [INFO][5624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:03.665277 env[1583]: 2025-08-13 00:04:03.660 [WARNING][5624] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" HandleID="k8s-pod-network.838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Workload="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:04:03.665277 env[1583]: 2025-08-13 00:04:03.660 [INFO][5624] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" HandleID="k8s-pod-network.838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Workload="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:04:03.665277 env[1583]: 2025-08-13 00:04:03.662 [INFO][5624] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:03.665277 env[1583]: 2025-08-13 00:04:03.663 [INFO][5616] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Aug 13 00:04:03.665859 env[1583]: time="2025-08-13T00:04:03.665812069Z" level=info msg="TearDown network for sandbox \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\" successfully" Aug 13 00:04:03.665942 env[1583]: time="2025-08-13T00:04:03.665923869Z" level=info msg="StopPodSandbox for \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\" returns successfully" Aug 13 00:04:03.666647 env[1583]: time="2025-08-13T00:04:03.666613509Z" level=info msg="RemovePodSandbox for \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\"" Aug 13 00:04:03.666758 env[1583]: time="2025-08-13T00:04:03.666654189Z" level=info msg="Forcibly stopping sandbox \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\"" Aug 13 00:04:03.741736 env[1583]: 2025-08-13 00:04:03.703 [WARNING][5639] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e788131-5ffd-4005-9137-e23c17af1da5", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"f73796d03dac922376871a21cab0ae8506d92dfe645e92ba96f1b8310878edf0", Pod:"csi-node-driver-dc7fc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8e42674a46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:03.741736 env[1583]: 2025-08-13 00:04:03.704 [INFO][5639] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Aug 13 00:04:03.741736 env[1583]: 2025-08-13 00:04:03.704 [INFO][5639] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" iface="eth0" netns="" Aug 13 00:04:03.741736 env[1583]: 2025-08-13 00:04:03.704 [INFO][5639] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Aug 13 00:04:03.741736 env[1583]: 2025-08-13 00:04:03.704 [INFO][5639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Aug 13 00:04:03.741736 env[1583]: 2025-08-13 00:04:03.726 [INFO][5646] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" HandleID="k8s-pod-network.838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Workload="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:04:03.741736 env[1583]: 2025-08-13 00:04:03.726 [INFO][5646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:03.741736 env[1583]: 2025-08-13 00:04:03.726 [INFO][5646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:03.741736 env[1583]: 2025-08-13 00:04:03.736 [WARNING][5646] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" HandleID="k8s-pod-network.838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Workload="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:04:03.741736 env[1583]: 2025-08-13 00:04:03.736 [INFO][5646] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" HandleID="k8s-pod-network.838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Workload="ci--3510.3.8--a--dd293077f6-k8s-csi--node--driver--dc7fc-eth0" Aug 13 00:04:03.741736 env[1583]: 2025-08-13 00:04:03.738 [INFO][5646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:03.741736 env[1583]: 2025-08-13 00:04:03.740 [INFO][5639] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0" Aug 13 00:04:03.742287 env[1583]: time="2025-08-13T00:04:03.742237679Z" level=info msg="TearDown network for sandbox \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\" successfully" Aug 13 00:04:03.749212 env[1583]: time="2025-08-13T00:04:03.749168880Z" level=info msg="RemovePodSandbox \"838fffdc175cdb2fd82e62d47433c5e5f3e885d3d9b33b5ebdd7fec9c479d8f0\" returns successfully" Aug 13 00:04:03.749907 env[1583]: time="2025-08-13T00:04:03.749876880Z" level=info msg="StopPodSandbox for \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\"" Aug 13 00:04:03.821216 env[1583]: 2025-08-13 00:04:03.785 [WARNING][5660] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"ec1ef1a0-b4ac-42a1-9532-047e283102fa", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd", Pod:"goldmane-58fd7646b9-kcl6d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib0a1b86e791", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:03.821216 env[1583]: 2025-08-13 00:04:03.785 [INFO][5660] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Aug 13 00:04:03.821216 env[1583]: 2025-08-13 00:04:03.785 [INFO][5660] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" iface="eth0" netns="" Aug 13 00:04:03.821216 env[1583]: 2025-08-13 00:04:03.785 [INFO][5660] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Aug 13 00:04:03.821216 env[1583]: 2025-08-13 00:04:03.785 [INFO][5660] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Aug 13 00:04:03.821216 env[1583]: 2025-08-13 00:04:03.806 [INFO][5667] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" HandleID="k8s-pod-network.8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Workload="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:04:03.821216 env[1583]: 2025-08-13 00:04:03.806 [INFO][5667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:03.821216 env[1583]: 2025-08-13 00:04:03.806 [INFO][5667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:03.821216 env[1583]: 2025-08-13 00:04:03.816 [WARNING][5667] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" HandleID="k8s-pod-network.8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Workload="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:04:03.821216 env[1583]: 2025-08-13 00:04:03.816 [INFO][5667] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" HandleID="k8s-pod-network.8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Workload="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:04:03.821216 env[1583]: 2025-08-13 00:04:03.818 [INFO][5667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:03.821216 env[1583]: 2025-08-13 00:04:03.819 [INFO][5660] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Aug 13 00:04:03.822960 env[1583]: time="2025-08-13T00:04:03.822911170Z" level=info msg="TearDown network for sandbox \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\" successfully" Aug 13 00:04:03.823042 env[1583]: time="2025-08-13T00:04:03.823025610Z" level=info msg="StopPodSandbox for \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\" returns successfully" Aug 13 00:04:03.823618 env[1583]: time="2025-08-13T00:04:03.823592010Z" level=info msg="RemovePodSandbox for \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\"" Aug 13 00:04:03.823893 env[1583]: time="2025-08-13T00:04:03.823840530Z" level=info msg="Forcibly stopping sandbox \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\"" Aug 13 00:04:03.895220 env[1583]: 2025-08-13 00:04:03.858 [WARNING][5681] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"ec1ef1a0-b4ac-42a1-9532-047e283102fa", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"8b875ac0503feddbf0bbbf49c74c491ce0277e331c57d23a0a653a93a9b6e6dd", Pod:"goldmane-58fd7646b9-kcl6d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib0a1b86e791", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:03.895220 env[1583]: 2025-08-13 00:04:03.859 [INFO][5681] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Aug 13 00:04:03.895220 env[1583]: 2025-08-13 00:04:03.859 [INFO][5681] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" iface="eth0" netns="" Aug 13 00:04:03.895220 env[1583]: 2025-08-13 00:04:03.859 [INFO][5681] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Aug 13 00:04:03.895220 env[1583]: 2025-08-13 00:04:03.859 [INFO][5681] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Aug 13 00:04:03.895220 env[1583]: 2025-08-13 00:04:03.879 [INFO][5689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" HandleID="k8s-pod-network.8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Workload="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:04:03.895220 env[1583]: 2025-08-13 00:04:03.879 [INFO][5689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:03.895220 env[1583]: 2025-08-13 00:04:03.879 [INFO][5689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:03.895220 env[1583]: 2025-08-13 00:04:03.890 [WARNING][5689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" HandleID="k8s-pod-network.8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Workload="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:04:03.895220 env[1583]: 2025-08-13 00:04:03.890 [INFO][5689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" HandleID="k8s-pod-network.8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Workload="ci--3510.3.8--a--dd293077f6-k8s-goldmane--58fd7646b9--kcl6d-eth0" Aug 13 00:04:03.895220 env[1583]: 2025-08-13 00:04:03.892 [INFO][5689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:03.895220 env[1583]: 2025-08-13 00:04:03.893 [INFO][5681] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d" Aug 13 00:04:03.895771 env[1583]: time="2025-08-13T00:04:03.895724060Z" level=info msg="TearDown network for sandbox \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\" successfully" Aug 13 00:04:03.901928 env[1583]: time="2025-08-13T00:04:03.901885180Z" level=info msg="RemovePodSandbox \"8e4d5e54bd38f42e4141c2ebfbd61aaae12d6dfe133d0116ce035461054e071d\" returns successfully" Aug 13 00:04:03.902583 env[1583]: time="2025-08-13T00:04:03.902555941Z" level=info msg="StopPodSandbox for \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\"" Aug 13 00:04:03.985965 env[1583]: 2025-08-13 00:04:03.945 [WARNING][5703] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0", GenerateName:"calico-kube-controllers-576869b9dc-", Namespace:"calico-system", SelfLink:"", UID:"1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"576869b9dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5", Pod:"calico-kube-controllers-576869b9dc-vzbtv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali837eeda51f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:03.985965 env[1583]: 2025-08-13 00:04:03.945 [INFO][5703] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Aug 13 00:04:03.985965 env[1583]: 2025-08-13 00:04:03.945 [INFO][5703] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" iface="eth0" netns="" Aug 13 00:04:03.985965 env[1583]: 2025-08-13 00:04:03.945 [INFO][5703] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Aug 13 00:04:03.985965 env[1583]: 2025-08-13 00:04:03.945 [INFO][5703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Aug 13 00:04:03.985965 env[1583]: 2025-08-13 00:04:03.971 [INFO][5710] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" HandleID="k8s-pod-network.939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:04:03.985965 env[1583]: 2025-08-13 00:04:03.972 [INFO][5710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:03.985965 env[1583]: 2025-08-13 00:04:03.972 [INFO][5710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:03.985965 env[1583]: 2025-08-13 00:04:03.981 [WARNING][5710] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" HandleID="k8s-pod-network.939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:04:03.985965 env[1583]: 2025-08-13 00:04:03.981 [INFO][5710] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" HandleID="k8s-pod-network.939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:04:03.985965 env[1583]: 2025-08-13 00:04:03.983 [INFO][5710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:03.985965 env[1583]: 2025-08-13 00:04:03.984 [INFO][5703] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Aug 13 00:04:03.986652 env[1583]: time="2025-08-13T00:04:03.986606312Z" level=info msg="TearDown network for sandbox \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\" successfully" Aug 13 00:04:03.986778 env[1583]: time="2025-08-13T00:04:03.986757512Z" level=info msg="StopPodSandbox for \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\" returns successfully" Aug 13 00:04:03.987356 env[1583]: time="2025-08-13T00:04:03.987330592Z" level=info msg="RemovePodSandbox for \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\"" Aug 13 00:04:03.987544 env[1583]: time="2025-08-13T00:04:03.987504432Z" level=info msg="Forcibly stopping sandbox \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\"" Aug 13 00:04:04.067628 env[1583]: 2025-08-13 00:04:04.024 [WARNING][5724] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0", GenerateName:"calico-kube-controllers-576869b9dc-", Namespace:"calico-system", SelfLink:"", UID:"1d5bf07f-d99d-4674-bcbd-ba9b13b0fbd9", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"576869b9dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"5304f53a7749cd4041b7a1a4ec145b7a59c8eedef3e6df3b9a8ddaa41dd230a5", Pod:"calico-kube-controllers-576869b9dc-vzbtv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali837eeda51f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:04.067628 env[1583]: 2025-08-13 00:04:04.024 [INFO][5724] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Aug 13 00:04:04.067628 env[1583]: 2025-08-13 00:04:04.024 [INFO][5724] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" iface="eth0" netns="" Aug 13 00:04:04.067628 env[1583]: 2025-08-13 00:04:04.024 [INFO][5724] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Aug 13 00:04:04.067628 env[1583]: 2025-08-13 00:04:04.024 [INFO][5724] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Aug 13 00:04:04.067628 env[1583]: 2025-08-13 00:04:04.051 [INFO][5732] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" HandleID="k8s-pod-network.939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:04:04.067628 env[1583]: 2025-08-13 00:04:04.052 [INFO][5732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:04.067628 env[1583]: 2025-08-13 00:04:04.052 [INFO][5732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:04.067628 env[1583]: 2025-08-13 00:04:04.062 [WARNING][5732] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" HandleID="k8s-pod-network.939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:04:04.067628 env[1583]: 2025-08-13 00:04:04.062 [INFO][5732] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" HandleID="k8s-pod-network.939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--kube--controllers--576869b9dc--vzbtv-eth0" Aug 13 00:04:04.067628 env[1583]: 2025-08-13 00:04:04.064 [INFO][5732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:04.067628 env[1583]: 2025-08-13 00:04:04.066 [INFO][5724] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021" Aug 13 00:04:04.068187 env[1583]: time="2025-08-13T00:04:04.068144763Z" level=info msg="TearDown network for sandbox \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\" successfully" Aug 13 00:04:04.076343 env[1583]: time="2025-08-13T00:04:04.076223124Z" level=info msg="RemovePodSandbox \"939ded7457fd75b953830e6e1bc1c61feaa141835f32f937f88b682ada815021\" returns successfully" Aug 13 00:04:04.077924 env[1583]: time="2025-08-13T00:04:04.077875244Z" level=info msg="StopPodSandbox for \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\"" Aug 13 00:04:04.158751 env[1583]: 2025-08-13 00:04:04.113 [WARNING][5747] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"527d02b8-3c8b-4d2b-ac23-d425550b3599", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a", Pod:"coredns-7c65d6cfc9-gfdrg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5db8c1c60f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:04.158751 env[1583]: 2025-08-13 
00:04:04.113 [INFO][5747] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Aug 13 00:04:04.158751 env[1583]: 2025-08-13 00:04:04.113 [INFO][5747] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" iface="eth0" netns="" Aug 13 00:04:04.158751 env[1583]: 2025-08-13 00:04:04.113 [INFO][5747] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Aug 13 00:04:04.158751 env[1583]: 2025-08-13 00:04:04.113 [INFO][5747] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Aug 13 00:04:04.158751 env[1583]: 2025-08-13 00:04:04.142 [INFO][5754] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" HandleID="k8s-pod-network.1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:04:04.158751 env[1583]: 2025-08-13 00:04:04.142 [INFO][5754] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:04.158751 env[1583]: 2025-08-13 00:04:04.142 [INFO][5754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:04.158751 env[1583]: 2025-08-13 00:04:04.153 [WARNING][5754] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" HandleID="k8s-pod-network.1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:04:04.158751 env[1583]: 2025-08-13 00:04:04.153 [INFO][5754] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" HandleID="k8s-pod-network.1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:04:04.158751 env[1583]: 2025-08-13 00:04:04.155 [INFO][5754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:04.158751 env[1583]: 2025-08-13 00:04:04.156 [INFO][5747] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Aug 13 00:04:04.159291 env[1583]: time="2025-08-13T00:04:04.159241615Z" level=info msg="TearDown network for sandbox \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\" successfully" Aug 13 00:04:04.159372 env[1583]: time="2025-08-13T00:04:04.159355295Z" level=info msg="StopPodSandbox for \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\" returns successfully" Aug 13 00:04:04.159955 env[1583]: time="2025-08-13T00:04:04.159927615Z" level=info msg="RemovePodSandbox for \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\"" Aug 13 00:04:04.160115 env[1583]: time="2025-08-13T00:04:04.160076135Z" level=info msg="Forcibly stopping sandbox \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\"" Aug 13 00:04:04.205847 systemd[1]: run-containerd-runc-k8s.io-f8e39e7b31e6506e0cdfb6ac5a659be058ceeb9f650275f824d88a249493d425-runc.9FSUYR.mount: Deactivated successfully. 
Aug 13 00:04:04.280205 env[1583]: 2025-08-13 00:04:04.223 [WARNING][5772] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"527d02b8-3c8b-4d2b-ac23-d425550b3599", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"da1159cae28b753b6643381ff6974476a3eb1940e4fd4b3947fc9d632d38df5a", Pod:"coredns-7c65d6cfc9-gfdrg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5db8c1c60f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:04.280205 env[1583]: 2025-08-13 00:04:04.223 [INFO][5772] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Aug 13 00:04:04.280205 env[1583]: 2025-08-13 00:04:04.223 [INFO][5772] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" iface="eth0" netns="" Aug 13 00:04:04.280205 env[1583]: 2025-08-13 00:04:04.223 [INFO][5772] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Aug 13 00:04:04.280205 env[1583]: 2025-08-13 00:04:04.223 [INFO][5772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Aug 13 00:04:04.280205 env[1583]: 2025-08-13 00:04:04.263 [INFO][5795] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" HandleID="k8s-pod-network.1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:04:04.280205 env[1583]: 2025-08-13 00:04:04.263 [INFO][5795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:04.280205 env[1583]: 2025-08-13 00:04:04.263 [INFO][5795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:04.280205 env[1583]: 2025-08-13 00:04:04.274 [WARNING][5795] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" HandleID="k8s-pod-network.1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:04:04.280205 env[1583]: 2025-08-13 00:04:04.274 [INFO][5795] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" HandleID="k8s-pod-network.1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--gfdrg-eth0" Aug 13 00:04:04.280205 env[1583]: 2025-08-13 00:04:04.276 [INFO][5795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:04.280205 env[1583]: 2025-08-13 00:04:04.278 [INFO][5772] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502" Aug 13 00:04:04.280960 env[1583]: time="2025-08-13T00:04:04.280927431Z" level=info msg="TearDown network for sandbox \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\" successfully" Aug 13 00:04:04.287620 env[1583]: time="2025-08-13T00:04:04.287578272Z" level=info msg="RemovePodSandbox \"1d4d0802d55401c4f07699d1e971d25cf3e5fc33dddc2477fff34bc92283e502\" returns successfully" Aug 13 00:04:04.288297 env[1583]: time="2025-08-13T00:04:04.288257232Z" level=info msg="StopPodSandbox for \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\"" Aug 13 00:04:04.362182 env[1583]: 2025-08-13 00:04:04.324 [WARNING][5812] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0", GenerateName:"calico-apiserver-5664c8f75b-", Namespace:"calico-apiserver", SelfLink:"", UID:"28ea706e-5d40-433e-9ee6-62a5f96b1be1", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664c8f75b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7", Pod:"calico-apiserver-5664c8f75b-pc5h2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali10cd96087d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:04.362182 env[1583]: 2025-08-13 00:04:04.324 [INFO][5812] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Aug 13 00:04:04.362182 env[1583]: 2025-08-13 00:04:04.324 [INFO][5812] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" iface="eth0" netns="" Aug 13 00:04:04.362182 env[1583]: 2025-08-13 00:04:04.324 [INFO][5812] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Aug 13 00:04:04.362182 env[1583]: 2025-08-13 00:04:04.324 [INFO][5812] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Aug 13 00:04:04.362182 env[1583]: 2025-08-13 00:04:04.344 [INFO][5819] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" HandleID="k8s-pod-network.edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 00:04:04.362182 env[1583]: 2025-08-13 00:04:04.344 [INFO][5819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:04.362182 env[1583]: 2025-08-13 00:04:04.345 [INFO][5819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:04.362182 env[1583]: 2025-08-13 00:04:04.355 [WARNING][5819] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" HandleID="k8s-pod-network.edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 00:04:04.362182 env[1583]: 2025-08-13 00:04:04.355 [INFO][5819] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" HandleID="k8s-pod-network.edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 00:04:04.362182 env[1583]: 2025-08-13 00:04:04.358 [INFO][5819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:04.362182 env[1583]: 2025-08-13 00:04:04.360 [INFO][5812] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Aug 13 00:04:04.362182 env[1583]: time="2025-08-13T00:04:04.362128082Z" level=info msg="TearDown network for sandbox \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\" successfully" Aug 13 00:04:04.362182 env[1583]: time="2025-08-13T00:04:04.362159082Z" level=info msg="StopPodSandbox for \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\" returns successfully" Aug 13 00:04:04.363792 env[1583]: time="2025-08-13T00:04:04.363751682Z" level=info msg="RemovePodSandbox for \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\"" Aug 13 00:04:04.363891 env[1583]: time="2025-08-13T00:04:04.363796842Z" level=info msg="Forcibly stopping sandbox \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\"" Aug 13 00:04:04.435890 env[1583]: 2025-08-13 00:04:04.399 [WARNING][5833] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0", GenerateName:"calico-apiserver-5664c8f75b-", Namespace:"calico-apiserver", SelfLink:"", UID:"28ea706e-5d40-433e-9ee6-62a5f96b1be1", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664c8f75b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"22bfa0c9cdb21b36f93f79caefea5b39bd337d232a1ed708983c09ce93ce82c7", Pod:"calico-apiserver-5664c8f75b-pc5h2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali10cd96087d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:04.435890 env[1583]: 2025-08-13 00:04:04.399 [INFO][5833] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Aug 13 00:04:04.435890 env[1583]: 2025-08-13 00:04:04.399 [INFO][5833] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" iface="eth0" netns="" Aug 13 00:04:04.435890 env[1583]: 2025-08-13 00:04:04.399 [INFO][5833] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Aug 13 00:04:04.435890 env[1583]: 2025-08-13 00:04:04.399 [INFO][5833] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Aug 13 00:04:04.435890 env[1583]: 2025-08-13 00:04:04.421 [INFO][5840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" HandleID="k8s-pod-network.edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 00:04:04.435890 env[1583]: 2025-08-13 00:04:04.421 [INFO][5840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:04.435890 env[1583]: 2025-08-13 00:04:04.421 [INFO][5840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:04.435890 env[1583]: 2025-08-13 00:04:04.430 [WARNING][5840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" HandleID="k8s-pod-network.edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 00:04:04.435890 env[1583]: 2025-08-13 00:04:04.430 [INFO][5840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" HandleID="k8s-pod-network.edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--pc5h2-eth0" Aug 13 00:04:04.435890 env[1583]: 2025-08-13 00:04:04.432 [INFO][5840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:04.435890 env[1583]: 2025-08-13 00:04:04.434 [INFO][5833] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062" Aug 13 00:04:04.436332 env[1583]: time="2025-08-13T00:04:04.435924732Z" level=info msg="TearDown network for sandbox \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\" successfully" Aug 13 00:04:04.442176 env[1583]: time="2025-08-13T00:04:04.442129093Z" level=info msg="RemovePodSandbox \"edd091152171f7b069755787a194632b052918e3d75b084cdb4e231d1ed5c062\" returns successfully" Aug 13 00:04:04.442879 env[1583]: time="2025-08-13T00:04:04.442850813Z" level=info msg="StopPodSandbox for \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\"" Aug 13 00:04:04.533610 env[1583]: 2025-08-13 00:04:04.484 [WARNING][5855] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3816201b-b93a-4ec2-a67a-d16b5eed4f52", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8", Pod:"coredns-7c65d6cfc9-ktbwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif97246af908", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:04.533610 env[1583]: 2025-08-13 
00:04:04.485 [INFO][5855] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Aug 13 00:04:04.533610 env[1583]: 2025-08-13 00:04:04.485 [INFO][5855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" iface="eth0" netns="" Aug 13 00:04:04.533610 env[1583]: 2025-08-13 00:04:04.485 [INFO][5855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Aug 13 00:04:04.533610 env[1583]: 2025-08-13 00:04:04.485 [INFO][5855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Aug 13 00:04:04.533610 env[1583]: 2025-08-13 00:04:04.508 [INFO][5862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" HandleID="k8s-pod-network.28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:04:04.533610 env[1583]: 2025-08-13 00:04:04.508 [INFO][5862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:04.533610 env[1583]: 2025-08-13 00:04:04.508 [INFO][5862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:04.533610 env[1583]: 2025-08-13 00:04:04.524 [WARNING][5862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" HandleID="k8s-pod-network.28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:04:04.533610 env[1583]: 2025-08-13 00:04:04.524 [INFO][5862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" HandleID="k8s-pod-network.28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:04:04.533610 env[1583]: 2025-08-13 00:04:04.526 [INFO][5862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:04.533610 env[1583]: 2025-08-13 00:04:04.529 [INFO][5855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Aug 13 00:04:04.534183 env[1583]: time="2025-08-13T00:04:04.534145585Z" level=info msg="TearDown network for sandbox \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\" successfully" Aug 13 00:04:04.534256 env[1583]: time="2025-08-13T00:04:04.534239305Z" level=info msg="StopPodSandbox for \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\" returns successfully" Aug 13 00:04:04.535035 env[1583]: time="2025-08-13T00:04:04.535010345Z" level=info msg="RemovePodSandbox for \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\"" Aug 13 00:04:04.535181 env[1583]: time="2025-08-13T00:04:04.535142265Z" level=info msg="Forcibly stopping sandbox \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\"" Aug 13 00:04:04.607327 env[1583]: 2025-08-13 00:04:04.570 [WARNING][5876] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3816201b-b93a-4ec2-a67a-d16b5eed4f52", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"6c7a15c00d4b5b9367baf3a3abcc3ef71e3b826b78a90e7c716b25cd32c4c3e8", Pod:"coredns-7c65d6cfc9-ktbwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif97246af908", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:04.607327 env[1583]: 2025-08-13 
00:04:04.570 [INFO][5876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Aug 13 00:04:04.607327 env[1583]: 2025-08-13 00:04:04.570 [INFO][5876] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" iface="eth0" netns="" Aug 13 00:04:04.607327 env[1583]: 2025-08-13 00:04:04.570 [INFO][5876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Aug 13 00:04:04.607327 env[1583]: 2025-08-13 00:04:04.570 [INFO][5876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Aug 13 00:04:04.607327 env[1583]: 2025-08-13 00:04:04.591 [INFO][5883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" HandleID="k8s-pod-network.28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:04:04.607327 env[1583]: 2025-08-13 00:04:04.591 [INFO][5883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:04.607327 env[1583]: 2025-08-13 00:04:04.591 [INFO][5883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:04:04.607327 env[1583]: 2025-08-13 00:04:04.602 [WARNING][5883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" HandleID="k8s-pod-network.28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:04:04.607327 env[1583]: 2025-08-13 00:04:04.602 [INFO][5883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" HandleID="k8s-pod-network.28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Workload="ci--3510.3.8--a--dd293077f6-k8s-coredns--7c65d6cfc9--ktbwq-eth0" Aug 13 00:04:04.607327 env[1583]: 2025-08-13 00:04:04.604 [INFO][5883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:04.607327 env[1583]: 2025-08-13 00:04:04.605 [INFO][5876] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00" Aug 13 00:04:04.607818 env[1583]: time="2025-08-13T00:04:04.607365395Z" level=info msg="TearDown network for sandbox \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\" successfully" Aug 13 00:04:04.613739 env[1583]: time="2025-08-13T00:04:04.613608636Z" level=info msg="RemovePodSandbox \"28c6889dec5f17c11b47d3ed693d9afe879801706eb6a659afadea9406346a00\" returns successfully" Aug 13 00:04:34.531648 update_engine[1568]: I0813 00:04:34.531522 1568 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 13 00:04:34.531648 update_engine[1568]: I0813 00:04:34.531560 1568 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 13 00:04:34.533003 update_engine[1568]: I0813 00:04:34.532718 1568 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 13 00:04:34.533409 update_engine[1568]: I0813 00:04:34.533342 1568 omaha_request_params.cc:62] Current group set to lts Aug 13 00:04:34.534399 update_engine[1568]: I0813 
00:04:34.534214 1568 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 13 00:04:34.534399 update_engine[1568]: I0813 00:04:34.534227 1568 update_attempter.cc:643] Scheduling an action processor start. Aug 13 00:04:34.534399 update_engine[1568]: I0813 00:04:34.534243 1568 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 00:04:34.534399 update_engine[1568]: I0813 00:04:34.534276 1568 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 13 00:04:34.537548 locksmithd[1669]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Aug 13 00:04:34.549090 update_engine[1568]: I0813 00:04:34.548982 1568 omaha_request_action.cc:270] Posting an Omaha request to disabled Aug 13 00:04:34.549090 update_engine[1568]: I0813 00:04:34.549010 1568 omaha_request_action.cc:271] Request: Aug 13 00:04:34.549090 update_engine[1568]: Aug 13 00:04:34.549090 update_engine[1568]: Aug 13 00:04:34.549090 update_engine[1568]: Aug 13 00:04:34.549090 update_engine[1568]: Aug 13 00:04:34.549090 update_engine[1568]: Aug 13 00:04:34.549090 update_engine[1568]: Aug 13 00:04:34.549090 update_engine[1568]: Aug 13 00:04:34.549090 update_engine[1568]: Aug 13 00:04:34.549090 update_engine[1568]: I0813 00:04:34.549016 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:04:34.610372 update_engine[1568]: I0813 00:04:34.610040 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:04:34.610372 update_engine[1568]: I0813 00:04:34.610328 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 00:04:34.645230 update_engine[1568]: E0813 00:04:34.645059 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:04:34.645230 update_engine[1568]: I0813 00:04:34.645186 1568 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 13 00:04:35.973962 kubelet[2655]: I0813 00:04:35.973915 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:04:36.065642 env[1583]: time="2025-08-13T00:04:36.065593025Z" level=info msg="StopContainer for \"3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94\" with timeout 30 (s)" Aug 13 00:04:36.066064 env[1583]: time="2025-08-13T00:04:36.065955665Z" level=info msg="Stop container \"3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94\" with signal terminated" Aug 13 00:04:36.114000 audit[5980]: NETFILTER_CFG table=filter:141 family=2 entries=8 op=nft_register_rule pid=5980 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:36.120936 kernel: kauditd_printk_skb: 2 callbacks suppressed Aug 13 00:04:36.121062 kernel: audit: type=1325 audit(1755043476.114:451): table=filter:141 family=2 entries=8 op=nft_register_rule pid=5980 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:36.114000 audit[5980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffff8f86e00 a2=0 a3=1 items=0 ppid=2755 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:36.171160 kernel: audit: type=1300 audit(1755043476.114:451): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffff8f86e00 a2=0 a3=1 items=0 ppid=2755 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Aug 13 00:04:36.114000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:36.193365 kernel: audit: type=1327 audit(1755043476.114:451): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:36.195139 kubelet[2655]: I0813 00:04:36.195021 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6205eef5-0f95-43ed-8fda-9f3dae34e07d-calico-apiserver-certs\") pod \"calico-apiserver-5664c8f75b-mfw4w\" (UID: \"6205eef5-0f95-43ed-8fda-9f3dae34e07d\") " pod="calico-apiserver/calico-apiserver-5664c8f75b-mfw4w" Aug 13 00:04:36.195139 kubelet[2655]: I0813 00:04:36.195090 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxk8c\" (UniqueName: \"kubernetes.io/projected/6205eef5-0f95-43ed-8fda-9f3dae34e07d-kube-api-access-sxk8c\") pod \"calico-apiserver-5664c8f75b-mfw4w\" (UID: \"6205eef5-0f95-43ed-8fda-9f3dae34e07d\") " pod="calico-apiserver/calico-apiserver-5664c8f75b-mfw4w" Aug 13 00:04:36.216000 audit[5980]: NETFILTER_CFG table=nat:142 family=2 entries=44 op=nft_register_chain pid=5980 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:36.216000 audit[5980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14660 a0=3 a1=fffff8f86e00 a2=0 a3=1 items=0 ppid=2755 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:36.277505 kernel: audit: type=1325 audit(1755043476.216:452): table=nat:142 family=2 entries=44 op=nft_register_chain pid=5980 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:36.277631 kernel: audit: type=1300 
audit(1755043476.216:452): arch=c00000b7 syscall=211 success=yes exit=14660 a0=3 a1=fffff8f86e00 a2=0 a3=1 items=0 ppid=2755 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:36.285389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94-rootfs.mount: Deactivated successfully. Aug 13 00:04:36.287087 env[1583]: time="2025-08-13T00:04:36.286039407Z" level=info msg="shim disconnected" id=3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94 Aug 13 00:04:36.287087 env[1583]: time="2025-08-13T00:04:36.286172207Z" level=warning msg="cleaning up after shim disconnected" id=3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94 namespace=k8s.io Aug 13 00:04:36.287087 env[1583]: time="2025-08-13T00:04:36.286181967Z" level=info msg="cleaning up dead shim" Aug 13 00:04:36.216000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:36.301852 kernel: audit: type=1327 audit(1755043476.216:452): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:36.317854 env[1583]: time="2025-08-13T00:04:36.317806771Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:04:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6004 runtime=io.containerd.runc.v2\n" Aug 13 00:04:36.334000 audit[6002]: NETFILTER_CFG table=filter:143 family=2 entries=8 op=nft_register_rule pid=6002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:36.334000 audit[6002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffc6e67d20 a2=0 a3=1 items=0 ppid=2755 pid=6002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:36.386762 kernel: audit: type=1325 audit(1755043476.334:453): table=filter:143 family=2 entries=8 op=nft_register_rule pid=6002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:36.386841 kernel: audit: type=1300 audit(1755043476.334:453): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffc6e67d20 a2=0 a3=1 items=0 ppid=2755 pid=6002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:36.334000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:36.402745 kernel: audit: type=1327 audit(1755043476.334:453): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:36.385000 audit[6002]: NETFILTER_CFG table=nat:144 family=2 entries=44 op=nft_unregister_chain pid=6002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:36.422547 kernel: audit: type=1325 audit(1755043476.385:454): table=nat:144 family=2 entries=44 op=nft_unregister_chain pid=6002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:36.385000 audit[6002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12900 a0=3 a1=ffffc6e67d20 a2=0 a3=1 items=0 ppid=2755 pid=6002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:36.385000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:36.434376 env[1583]: 
time="2025-08-13T00:04:36.434333623Z" level=info msg="StopContainer for \"3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94\" returns successfully" Aug 13 00:04:36.435229 env[1583]: time="2025-08-13T00:04:36.435196583Z" level=info msg="StopPodSandbox for \"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a\"" Aug 13 00:04:36.435427 env[1583]: time="2025-08-13T00:04:36.435400423Z" level=info msg="Container to stop \"3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:04:36.439920 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a-shm.mount: Deactivated successfully. Aug 13 00:04:36.476471 env[1583]: time="2025-08-13T00:04:36.476434747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664c8f75b-mfw4w,Uid:6205eef5-0f95-43ed-8fda-9f3dae34e07d,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:04:36.482290 env[1583]: time="2025-08-13T00:04:36.481645908Z" level=info msg="shim disconnected" id=6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a Aug 13 00:04:36.482449 env[1583]: time="2025-08-13T00:04:36.482428348Z" level=warning msg="cleaning up after shim disconnected" id=6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a namespace=k8s.io Aug 13 00:04:36.482526 env[1583]: time="2025-08-13T00:04:36.482501748Z" level=info msg="cleaning up dead shim" Aug 13 00:04:36.499985 env[1583]: time="2025-08-13T00:04:36.499949069Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:04:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6041 runtime=io.containerd.runc.v2\n" Aug 13 00:04:36.607900 systemd-networkd[1759]: cali48a074b18cb: Link DOWN Aug 13 00:04:36.607907 systemd-networkd[1759]: cali48a074b18cb: Lost carrier Aug 13 00:04:36.653000 audit[6093]: NETFILTER_CFG table=filter:145 family=2 entries=59 
op=nft_register_rule pid=6093 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:04:36.653000 audit[6093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10132 a0=3 a1=ffffd1b5f210 a2=0 a3=ffff97d49fa8 items=0 ppid=3940 pid=6093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:36.653000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:04:36.654000 audit[6093]: NETFILTER_CFG table=filter:146 family=2 entries=2 op=nft_unregister_chain pid=6093 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:04:36.654000 audit[6093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffd1b5f210 a2=0 a3=ffff97d49fa8 items=0 ppid=3940 pid=6093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:36.654000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:04:36.797650 env[1583]: 2025-08-13 00:04:36.604 [INFO][6073] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Aug 13 00:04:36.797650 env[1583]: 2025-08-13 00:04:36.605 [INFO][6073] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" iface="eth0" netns="/var/run/netns/cni-18941cfb-1edf-b639-313c-0cc2f11e7591" Aug 13 00:04:36.797650 env[1583]: 2025-08-13 00:04:36.606 [INFO][6073] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" iface="eth0" netns="/var/run/netns/cni-18941cfb-1edf-b639-313c-0cc2f11e7591" Aug 13 00:04:36.797650 env[1583]: 2025-08-13 00:04:36.620 [INFO][6073] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" after=14.759042ms iface="eth0" netns="/var/run/netns/cni-18941cfb-1edf-b639-313c-0cc2f11e7591" Aug 13 00:04:36.797650 env[1583]: 2025-08-13 00:04:36.621 [INFO][6073] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Aug 13 00:04:36.797650 env[1583]: 2025-08-13 00:04:36.621 [INFO][6073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Aug 13 00:04:36.797650 env[1583]: 2025-08-13 00:04:36.662 [INFO][6092] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" HandleID="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:04:36.797650 env[1583]: 2025-08-13 00:04:36.662 [INFO][6092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:36.797650 env[1583]: 2025-08-13 00:04:36.662 [INFO][6092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:04:36.797650 env[1583]: 2025-08-13 00:04:36.787 [INFO][6092] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" HandleID="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:04:36.797650 env[1583]: 2025-08-13 00:04:36.787 [INFO][6092] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" HandleID="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0" Aug 13 00:04:36.797650 env[1583]: 2025-08-13 00:04:36.790 [INFO][6092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:36.797650 env[1583]: 2025-08-13 00:04:36.791 [INFO][6073] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Aug 13 00:04:36.797650 env[1583]: time="2025-08-13T00:04:36.796974820Z" level=info msg="TearDown network for sandbox \"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a\" successfully" Aug 13 00:04:36.797650 env[1583]: time="2025-08-13T00:04:36.797010340Z" level=info msg="StopPodSandbox for \"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a\" returns successfully" Aug 13 00:04:36.902915 systemd-networkd[1759]: cali5c8ab1e510d: Link UP Aug 13 00:04:36.903139 systemd-networkd[1759]: cali5c8ab1e510d: Gained carrier Aug 13 00:04:36.904272 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5c8ab1e510d: link becomes ready Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.593 [INFO][6055] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--mfw4w-eth0 calico-apiserver-5664c8f75b- calico-apiserver 6205eef5-0f95-43ed-8fda-9f3dae34e07d 1230 0 2025-08-13 00:04:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5664c8f75b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-a-dd293077f6 calico-apiserver-5664c8f75b-mfw4w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5c8ab1e510d [] [] }} ContainerID="90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-mfw4w" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--mfw4w-" Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.593 [INFO][6055] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" 
Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-mfw4w" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--mfw4w-eth0" Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.717 [INFO][6085] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" HandleID="k8s-pod-network.90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--mfw4w-eth0" Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.717 [INFO][6085] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" HandleID="k8s-pod-network.90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--mfw4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-a-dd293077f6", "pod":"calico-apiserver-5664c8f75b-mfw4w", "timestamp":"2025-08-13 00:04:36.717043132 +0000 UTC"}, Hostname:"ci-3510.3.8-a-dd293077f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.717 [INFO][6085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.790 [INFO][6085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.790 [INFO][6085] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-a-dd293077f6' Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.823 [INFO][6085] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.836 [INFO][6085] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-a-dd293077f6" Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.848 [INFO][6085] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.851 [INFO][6085] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.855 [INFO][6085] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-3510.3.8-a-dd293077f6" Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.855 [INFO][6085] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.860 [INFO][6085] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.868 [INFO][6085] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.878 [INFO][6085] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.74/26] block=192.168.55.64/26 
handle="k8s-pod-network.90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.878 [INFO][6085] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.74/26] handle="k8s-pod-network.90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" host="ci-3510.3.8-a-dd293077f6" Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.878 [INFO][6085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:36.915553 env[1583]: 2025-08-13 00:04:36.878 [INFO][6085] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.74/26] IPv6=[] ContainerID="90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" HandleID="k8s-pod-network.90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--mfw4w-eth0" Aug 13 00:04:36.916177 env[1583]: 2025-08-13 00:04:36.880 [INFO][6055] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-mfw4w" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--mfw4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--mfw4w-eth0", GenerateName:"calico-apiserver-5664c8f75b-", Namespace:"calico-apiserver", SelfLink:"", UID:"6205eef5-0f95-43ed-8fda-9f3dae34e07d", ResourceVersion:"1230", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 4, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664c8f75b", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"", Pod:"calico-apiserver-5664c8f75b-mfw4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c8ab1e510d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:04:36.916177 env[1583]: 2025-08-13 00:04:36.880 [INFO][6055] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.74/32] ContainerID="90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-mfw4w" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--mfw4w-eth0" Aug 13 00:04:36.916177 env[1583]: 2025-08-13 00:04:36.880 [INFO][6055] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c8ab1e510d ContainerID="90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-mfw4w" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--mfw4w-eth0" Aug 13 00:04:36.916177 env[1583]: 2025-08-13 00:04:36.892 [INFO][6055] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-mfw4w" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--mfw4w-eth0" Aug 13 
00:04:36.916177 env[1583]: 2025-08-13 00:04:36.892 [INFO][6055] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-mfw4w" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--mfw4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--mfw4w-eth0", GenerateName:"calico-apiserver-5664c8f75b-", Namespace:"calico-apiserver", SelfLink:"", UID:"6205eef5-0f95-43ed-8fda-9f3dae34e07d", ResourceVersion:"1230", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 4, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664c8f75b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-a-dd293077f6", ContainerID:"90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b", Pod:"calico-apiserver-5664c8f75b-mfw4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c8ab1e510d", MAC:"8a:b5:7b:6b:2e:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 
13 00:04:36.916177 env[1583]: 2025-08-13 00:04:36.913 [INFO][6055] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b" Namespace="calico-apiserver" Pod="calico-apiserver-5664c8f75b-mfw4w" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5664c8f75b--mfw4w-eth0" Aug 13 00:04:36.940936 env[1583]: time="2025-08-13T00:04:36.936267274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:04:36.940936 env[1583]: time="2025-08-13T00:04:36.936317994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:04:36.940936 env[1583]: time="2025-08-13T00:04:36.936328074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:04:36.940936 env[1583]: time="2025-08-13T00:04:36.936443354Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b pid=6140 runtime=io.containerd.runc.v2 Aug 13 00:04:36.949000 audit[6148]: NETFILTER_CFG table=filter:147 family=2 entries=67 op=nft_register_chain pid=6148 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:04:36.949000 audit[6148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=31852 a0=3 a1=ffffd2272000 a2=0 a3=ffffa1acbfa8 items=0 ppid=3940 pid=6148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:36.949000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 
13 00:04:37.011758 kubelet[2655]: I0813 00:04:37.010270 2655 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkvnd\" (UniqueName: \"kubernetes.io/projected/77ef7b45-3d42-4c7d-b3d2-5b91108fefb3-kube-api-access-dkvnd\") pod \"77ef7b45-3d42-4c7d-b3d2-5b91108fefb3\" (UID: \"77ef7b45-3d42-4c7d-b3d2-5b91108fefb3\") " Aug 13 00:04:37.011758 kubelet[2655]: I0813 00:04:37.010346 2655 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77ef7b45-3d42-4c7d-b3d2-5b91108fefb3-calico-apiserver-certs\") pod \"77ef7b45-3d42-4c7d-b3d2-5b91108fefb3\" (UID: \"77ef7b45-3d42-4c7d-b3d2-5b91108fefb3\") " Aug 13 00:04:37.014085 kubelet[2655]: I0813 00:04:37.014041 2655 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ef7b45-3d42-4c7d-b3d2-5b91108fefb3-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "77ef7b45-3d42-4c7d-b3d2-5b91108fefb3" (UID: "77ef7b45-3d42-4c7d-b3d2-5b91108fefb3"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:04:37.017799 kubelet[2655]: I0813 00:04:37.017770 2655 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77ef7b45-3d42-4c7d-b3d2-5b91108fefb3-kube-api-access-dkvnd" (OuterVolumeSpecName: "kube-api-access-dkvnd") pod "77ef7b45-3d42-4c7d-b3d2-5b91108fefb3" (UID: "77ef7b45-3d42-4c7d-b3d2-5b91108fefb3"). InnerVolumeSpecName "kube-api-access-dkvnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:04:37.022711 env[1583]: time="2025-08-13T00:04:37.022671323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664c8f75b-mfw4w,Uid:6205eef5-0f95-43ed-8fda-9f3dae34e07d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b\"" Aug 13 00:04:37.027429 env[1583]: time="2025-08-13T00:04:37.027390843Z" level=info msg="CreateContainer within sandbox \"90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:04:37.059509 env[1583]: time="2025-08-13T00:04:37.059466087Z" level=info msg="CreateContainer within sandbox \"90291a17d204eeb33ea83ba7da2dbe1ff2a4bc8e243545be4f5324cb0a92270b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"296c80e275f2cd99e57ea063a84eade5b4938b75d5ea939ff30793d899508e00\"" Aug 13 00:04:37.060513 env[1583]: time="2025-08-13T00:04:37.060473647Z" level=info msg="StartContainer for \"296c80e275f2cd99e57ea063a84eade5b4938b75d5ea939ff30793d899508e00\"" Aug 13 00:04:37.111365 kubelet[2655]: I0813 00:04:37.111217 2655 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77ef7b45-3d42-4c7d-b3d2-5b91108fefb3-calico-apiserver-certs\") on node \"ci-3510.3.8-a-dd293077f6\" DevicePath \"\"" Aug 13 00:04:37.111365 kubelet[2655]: I0813 00:04:37.111255 2655 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkvnd\" (UniqueName: \"kubernetes.io/projected/77ef7b45-3d42-4c7d-b3d2-5b91108fefb3-kube-api-access-dkvnd\") on node \"ci-3510.3.8-a-dd293077f6\" DevicePath \"\"" Aug 13 00:04:37.150487 env[1583]: time="2025-08-13T00:04:37.150435336Z" level=info msg="StartContainer for \"296c80e275f2cd99e57ea063a84eade5b4938b75d5ea939ff30793d899508e00\" returns successfully" Aug 13 00:04:37.204317 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a-rootfs.mount: Deactivated successfully. Aug 13 00:04:37.204463 systemd[1]: run-netns-cni\x2d18941cfb\x2d1edf\x2db639\x2d313c\x2d0cc2f11e7591.mount: Deactivated successfully. Aug 13 00:04:37.204555 systemd[1]: var-lib-kubelet-pods-77ef7b45\x2d3d42\x2d4c7d\x2db3d2\x2d5b91108fefb3-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 00:04:37.204654 systemd[1]: var-lib-kubelet-pods-77ef7b45\x2d3d42\x2d4c7d\x2db3d2\x2d5b91108fefb3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddkvnd.mount: Deactivated successfully. Aug 13 00:04:37.325883 kubelet[2655]: I0813 00:04:37.325840 2655 scope.go:117] "RemoveContainer" containerID="3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94" Aug 13 00:04:37.334565 env[1583]: time="2025-08-13T00:04:37.334531395Z" level=info msg="RemoveContainer for \"3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94\"" Aug 13 00:04:37.349248 env[1583]: time="2025-08-13T00:04:37.349208676Z" level=info msg="RemoveContainer for \"3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94\" returns successfully" Aug 13 00:04:37.353886 kubelet[2655]: I0813 00:04:37.353857 2655 scope.go:117] "RemoveContainer" containerID="3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94" Aug 13 00:04:37.354404 env[1583]: time="2025-08-13T00:04:37.354282557Z" level=error msg="ContainerStatus for \"3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94\": not found" Aug 13 00:04:37.355633 kubelet[2655]: E0813 00:04:37.355594 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94\": not found" containerID="3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94" Aug 13 00:04:37.357019 kubelet[2655]: I0813 00:04:37.356977 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94"} err="failed to get container status \"3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94\": rpc error: code = NotFound desc = an error occurred when try to find container \"3849fecb990642e64d6223f5bfbce031ab33add9f1940853d1954f5a42761c94\": not found" Aug 13 00:04:37.379552 kubelet[2655]: I0813 00:04:37.379497 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5664c8f75b-mfw4w" podStartSLOduration=1.379451959 podStartE2EDuration="1.379451959s" podCreationTimestamp="2025-08-13 00:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:04:37.363378958 +0000 UTC m=+95.042277704" watchObservedRunningTime="2025-08-13 00:04:37.379451959 +0000 UTC m=+95.058350665" Aug 13 00:04:37.403000 audit[6213]: NETFILTER_CFG table=filter:148 family=2 entries=8 op=nft_register_rule pid=6213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:37.403000 audit[6213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffffb3aaeb0 a2=0 a3=1 items=0 ppid=2755 pid=6213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:37.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:37.416000 audit[6213]: NETFILTER_CFG table=nat:149 family=2 entries=40 
op=nft_register_rule pid=6213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:37.416000 audit[6213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12772 a0=3 a1=fffffb3aaeb0 a2=0 a3=1 items=0 ppid=2755 pid=6213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:37.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:38.467000 audit[6215]: NETFILTER_CFG table=filter:150 family=2 entries=8 op=nft_register_rule pid=6215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:38.467000 audit[6215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffd91d5950 a2=0 a3=1 items=0 ppid=2755 pid=6215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:38.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:38.472000 audit[6215]: NETFILTER_CFG table=nat:151 family=2 entries=26 op=nft_register_rule pid=6215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:38.472000 audit[6215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffd91d5950 a2=0 a3=1 items=0 ppid=2755 pid=6215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:38.472000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:38.494166 kubelet[2655]: I0813 
00:04:38.494133 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77ef7b45-3d42-4c7d-b3d2-5b91108fefb3" path="/var/lib/kubelet/pods/77ef7b45-3d42-4c7d-b3d2-5b91108fefb3/volumes" Aug 13 00:04:38.797779 systemd-networkd[1759]: cali5c8ab1e510d: Gained IPv6LL Aug 13 00:04:39.330831 kubelet[2655]: I0813 00:04:39.330797 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:04:39.765000 audit[6219]: NETFILTER_CFG table=filter:152 family=2 entries=8 op=nft_register_rule pid=6219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:39.765000 audit[6219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffc06a8c80 a2=0 a3=1 items=0 ppid=2755 pid=6219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:39.765000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:39.772000 audit[6219]: NETFILTER_CFG table=nat:153 family=2 entries=44 op=nft_register_chain pid=6219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:39.772000 audit[6219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14660 a0=3 a1=ffffc06a8c80 a2=0 a3=1 items=0 ppid=2755 pid=6219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:39.772000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:39.777579 env[1583]: time="2025-08-13T00:04:39.777052962Z" level=info msg="StopContainer for \"728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400\" with timeout 30 (s)" Aug 13 00:04:39.777579 
env[1583]: time="2025-08-13T00:04:39.777442962Z" level=info msg="Stop container \"728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400\" with signal terminated" Aug 13 00:04:39.855270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400-rootfs.mount: Deactivated successfully. Aug 13 00:04:39.858579 env[1583]: time="2025-08-13T00:04:39.858478410Z" level=info msg="shim disconnected" id=728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400 Aug 13 00:04:39.858742 env[1583]: time="2025-08-13T00:04:39.858722890Z" level=warning msg="cleaning up after shim disconnected" id=728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400 namespace=k8s.io Aug 13 00:04:39.858827 env[1583]: time="2025-08-13T00:04:39.858812530Z" level=info msg="cleaning up dead shim" Aug 13 00:04:39.874176 env[1583]: time="2025-08-13T00:04:39.874122172Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:04:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6239 runtime=io.containerd.runc.v2\n" Aug 13 00:04:39.927511 env[1583]: time="2025-08-13T00:04:39.927466817Z" level=info msg="StopContainer for \"728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400\" returns successfully" Aug 13 00:04:39.928263 env[1583]: time="2025-08-13T00:04:39.928221737Z" level=info msg="StopPodSandbox for \"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b\"" Aug 13 00:04:39.928352 env[1583]: time="2025-08-13T00:04:39.928292577Z" level=info msg="Container to stop \"728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:04:39.931411 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b-shm.mount: Deactivated successfully. 
Aug 13 00:04:39.977471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b-rootfs.mount: Deactivated successfully. Aug 13 00:04:39.984902 env[1583]: time="2025-08-13T00:04:39.983026823Z" level=info msg="shim disconnected" id=bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b Aug 13 00:04:39.984902 env[1583]: time="2025-08-13T00:04:39.983085303Z" level=warning msg="cleaning up after shim disconnected" id=bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b namespace=k8s.io Aug 13 00:04:39.984902 env[1583]: time="2025-08-13T00:04:39.983094943Z" level=info msg="cleaning up dead shim" Aug 13 00:04:39.996970 env[1583]: time="2025-08-13T00:04:39.996915904Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:04:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6271 runtime=io.containerd.runc.v2\n" Aug 13 00:04:40.083328 systemd-networkd[1759]: cali2c3373c6db8: Link DOWN Aug 13 00:04:40.083334 systemd-networkd[1759]: cali2c3373c6db8: Lost carrier Aug 13 00:04:40.215167 env[1583]: 2025-08-13 00:04:40.082 [INFO][6296] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Aug 13 00:04:40.215167 env[1583]: 2025-08-13 00:04:40.082 [INFO][6296] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" iface="eth0" netns="/var/run/netns/cni-e7a2adca-13b1-689a-be07-ac5e83d59f46" Aug 13 00:04:40.215167 env[1583]: 2025-08-13 00:04:40.082 [INFO][6296] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" iface="eth0" netns="/var/run/netns/cni-e7a2adca-13b1-689a-be07-ac5e83d59f46" Aug 13 00:04:40.215167 env[1583]: 2025-08-13 00:04:40.093 [INFO][6296] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" after=11.334921ms iface="eth0" netns="/var/run/netns/cni-e7a2adca-13b1-689a-be07-ac5e83d59f46" Aug 13 00:04:40.215167 env[1583]: 2025-08-13 00:04:40.093 [INFO][6296] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Aug 13 00:04:40.215167 env[1583]: 2025-08-13 00:04:40.093 [INFO][6296] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Aug 13 00:04:40.215167 env[1583]: 2025-08-13 00:04:40.138 [INFO][6306] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" HandleID="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:04:40.215167 env[1583]: 2025-08-13 00:04:40.138 [INFO][6306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:04:40.215167 env[1583]: 2025-08-13 00:04:40.138 [INFO][6306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:04:40.215167 env[1583]: 2025-08-13 00:04:40.210 [INFO][6306] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" HandleID="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:04:40.215167 env[1583]: 2025-08-13 00:04:40.210 [INFO][6306] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" HandleID="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0" Aug 13 00:04:40.215167 env[1583]: 2025-08-13 00:04:40.212 [INFO][6306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:04:40.215167 env[1583]: 2025-08-13 00:04:40.213 [INFO][6296] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Aug 13 00:04:40.222106 env[1583]: time="2025-08-13T00:04:40.215437046Z" level=info msg="TearDown network for sandbox \"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b\" successfully" Aug 13 00:04:40.222106 env[1583]: time="2025-08-13T00:04:40.215474806Z" level=info msg="StopPodSandbox for \"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b\" returns successfully" Aug 13 00:04:40.221135 systemd[1]: run-netns-cni\x2de7a2adca\x2d13b1\x2d689a\x2dbe07\x2dac5e83d59f46.mount: Deactivated successfully. 
Aug 13 00:04:40.232000 audit[6313]: NETFILTER_CFG table=filter:154 family=2 entries=55 op=nft_register_rule pid=6313 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:04:40.232000 audit[6313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8928 a0=3 a1=fffff5723b30 a2=0 a3=ffff81087fa8 items=0 ppid=3940 pid=6313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:40.232000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:04:40.235000 audit[6313]: NETFILTER_CFG table=filter:155 family=2 entries=2 op=nft_unregister_chain pid=6313 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:04:40.235000 audit[6313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=fffff5723b30 a2=0 a3=ffff81087fa8 items=0 ppid=3940 pid=6313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:40.235000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:04:40.266000 audit[6316]: NETFILTER_CFG table=filter:156 family=2 entries=8 op=nft_register_rule pid=6316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:40.266000 audit[6316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffc75b9000 a2=0 a3=1 items=0 ppid=2755 pid=6316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 
00:04:40.266000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:40.273000 audit[6316]: NETFILTER_CFG table=nat:157 family=2 entries=44 op=nft_unregister_chain pid=6316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:04:40.273000 audit[6316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12900 a0=3 a1=ffffc75b9000 a2=0 a3=1 items=0 ppid=2755 pid=6316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:40.273000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:04:40.329240 kubelet[2655]: I0813 00:04:40.329196 2655 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddd9\" (UniqueName: \"kubernetes.io/projected/87091bc5-d911-4547-82d0-decf534f50dd-kube-api-access-pddd9\") pod \"87091bc5-d911-4547-82d0-decf534f50dd\" (UID: \"87091bc5-d911-4547-82d0-decf534f50dd\") " Aug 13 00:04:40.329613 kubelet[2655]: I0813 00:04:40.329253 2655 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/87091bc5-d911-4547-82d0-decf534f50dd-calico-apiserver-certs\") pod \"87091bc5-d911-4547-82d0-decf534f50dd\" (UID: \"87091bc5-d911-4547-82d0-decf534f50dd\") " Aug 13 00:04:40.338528 systemd[1]: var-lib-kubelet-pods-87091bc5\x2dd911\x2d4547\x2d82d0\x2ddecf534f50dd-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Aug 13 00:04:40.341626 kubelet[2655]: I0813 00:04:40.341606 2655 scope.go:117] "RemoveContainer" containerID="728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400" Aug 13 00:04:40.343182 kubelet[2655]: I0813 00:04:40.343146 2655 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87091bc5-d911-4547-82d0-decf534f50dd-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "87091bc5-d911-4547-82d0-decf534f50dd" (UID: "87091bc5-d911-4547-82d0-decf534f50dd"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:04:40.344220 kubelet[2655]: I0813 00:04:40.343845 2655 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87091bc5-d911-4547-82d0-decf534f50dd-kube-api-access-pddd9" (OuterVolumeSpecName: "kube-api-access-pddd9") pod "87091bc5-d911-4547-82d0-decf534f50dd" (UID: "87091bc5-d911-4547-82d0-decf534f50dd"). InnerVolumeSpecName "kube-api-access-pddd9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:04:40.347643 env[1583]: time="2025-08-13T00:04:40.346808299Z" level=info msg="RemoveContainer for \"728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400\"" Aug 13 00:04:40.353508 env[1583]: time="2025-08-13T00:04:40.353464740Z" level=info msg="RemoveContainer for \"728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400\" returns successfully" Aug 13 00:04:40.353764 kubelet[2655]: I0813 00:04:40.353740 2655 scope.go:117] "RemoveContainer" containerID="728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400" Aug 13 00:04:40.354015 env[1583]: time="2025-08-13T00:04:40.353963180Z" level=error msg="ContainerStatus for \"728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400\": not found" Aug 13 00:04:40.354206 kubelet[2655]: E0813 00:04:40.354173 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400\": not found" containerID="728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400" Aug 13 00:04:40.354283 kubelet[2655]: I0813 00:04:40.354209 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400"} err="failed to get container status \"728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400\": rpc error: code = NotFound desc = an error occurred when try to find container \"728d6d3fcb2409211057b60f33ad7235eaf0bd635e55a7fa02567aaf24659400\": not found" Aug 13 00:04:40.429987 kubelet[2655]: I0813 00:04:40.429931 2655 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pddd9\" (UniqueName: 
\"kubernetes.io/projected/87091bc5-d911-4547-82d0-decf534f50dd-kube-api-access-pddd9\") on node \"ci-3510.3.8-a-dd293077f6\" DevicePath \"\"" Aug 13 00:04:40.429987 kubelet[2655]: I0813 00:04:40.429962 2655 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/87091bc5-d911-4547-82d0-decf534f50dd-calico-apiserver-certs\") on node \"ci-3510.3.8-a-dd293077f6\" DevicePath \"\"" Aug 13 00:04:40.857591 systemd[1]: var-lib-kubelet-pods-87091bc5\x2dd911\x2d4547\x2d82d0\x2ddecf534f50dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpddd9.mount: Deactivated successfully. Aug 13 00:04:42.494653 kubelet[2655]: I0813 00:04:42.494598 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87091bc5-d911-4547-82d0-decf534f50dd" path="/var/lib/kubelet/pods/87091bc5-d911-4547-82d0-decf534f50dd/volumes" Aug 13 00:04:44.464761 update_engine[1568]: I0813 00:04:44.464706 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:04:44.465119 update_engine[1568]: I0813 00:04:44.464912 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:04:44.465119 update_engine[1568]: I0813 00:04:44.465093 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 00:04:44.499354 update_engine[1568]: E0813 00:04:44.499305 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:04:44.499500 update_engine[1568]: I0813 00:04:44.499410 1568 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Aug 13 00:04:54.463977 update_engine[1568]: I0813 00:04:54.463932 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:04:54.464333 update_engine[1568]: I0813 00:04:54.464134 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:04:54.464333 update_engine[1568]: I0813 00:04:54.464287 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 00:04:54.566855 update_engine[1568]: E0813 00:04:54.566803 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:04:54.566983 update_engine[1568]: I0813 00:04:54.566952 1568 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Aug 13 00:05:04.464745 update_engine[1568]: I0813 00:05:04.464699 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:05:04.465099 update_engine[1568]: I0813 00:05:04.464908 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:05:04.465099 update_engine[1568]: I0813 00:05:04.465083 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 00:05:04.473865 update_engine[1568]: E0813 00:05:04.473817 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:05:04.474017 update_engine[1568]: I0813 00:05:04.473964 1568 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 00:05:04.474017 update_engine[1568]: I0813 00:05:04.473975 1568 omaha_request_action.cc:621] Omaha request response: Aug 13 00:05:04.474103 update_engine[1568]: E0813 00:05:04.474081 1568 omaha_request_action.cc:640] Omaha request network transfer failed. Aug 13 00:05:04.474135 update_engine[1568]: I0813 00:05:04.474104 1568 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Aug 13 00:05:04.474135 update_engine[1568]: I0813 00:05:04.474108 1568 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 00:05:04.474135 update_engine[1568]: I0813 00:05:04.474110 1568 update_attempter.cc:306] Processing Done. Aug 13 00:05:04.474135 update_engine[1568]: E0813 00:05:04.474134 1568 update_attempter.cc:619] Update failed. 
Aug 13 00:05:04.474232 update_engine[1568]: I0813 00:05:04.474138 1568 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Aug 13 00:05:04.474232 update_engine[1568]: I0813 00:05:04.474141 1568 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Aug 13 00:05:04.474232 update_engine[1568]: I0813 00:05:04.474145 1568 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Aug 13 00:05:04.474232 update_engine[1568]: I0813 00:05:04.474207 1568 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 00:05:04.474232 update_engine[1568]: I0813 00:05:04.474227 1568 omaha_request_action.cc:270] Posting an Omaha request to disabled Aug 13 00:05:04.474232 update_engine[1568]: I0813 00:05:04.474231 1568 omaha_request_action.cc:271] Request: Aug 13 00:05:04.474232 update_engine[1568]: Aug 13 00:05:04.474232 update_engine[1568]: Aug 13 00:05:04.474232 update_engine[1568]: Aug 13 00:05:04.474232 update_engine[1568]: Aug 13 00:05:04.474232 update_engine[1568]: Aug 13 00:05:04.474232 update_engine[1568]: Aug 13 00:05:04.474232 update_engine[1568]: I0813 00:05:04.474236 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:05:04.474489 update_engine[1568]: I0813 00:05:04.474361 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:05:04.474651 update_engine[1568]: I0813 00:05:04.474534 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 00:05:04.474858 locksmithd[1669]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Aug 13 00:05:04.485545 update_engine[1568]: E0813 00:05:04.485504 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:05:04.485705 update_engine[1568]: I0813 00:05:04.485612 1568 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 00:05:04.485705 update_engine[1568]: I0813 00:05:04.485618 1568 omaha_request_action.cc:621] Omaha request response: Aug 13 00:05:04.485705 update_engine[1568]: I0813 00:05:04.485623 1568 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 00:05:04.485705 update_engine[1568]: I0813 00:05:04.485626 1568 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 00:05:04.485705 update_engine[1568]: I0813 00:05:04.485630 1568 update_attempter.cc:306] Processing Done. Aug 13 00:05:04.485705 update_engine[1568]: I0813 00:05:04.485634 1568 update_attempter.cc:310] Error event sent. 
Aug 13 00:05:04.485705 update_engine[1568]: I0813 00:05:04.485642  1568 update_check_scheduler.cc:74] Next update check in 44m30s
Aug 13 00:05:04.486190 locksmithd[1669]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Aug 13 00:05:04.621274 env[1583]: time="2025-08-13T00:05:04.621235410Z" level=info msg="StopPodSandbox for \"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a\""
Aug 13 00:05:04.694044 env[1583]: 2025-08-13 00:05:04.657 [WARNING][6401] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0"
Aug 13 00:05:04.694044 env[1583]: 2025-08-13 00:05:04.657 [INFO][6401] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a"
Aug 13 00:05:04.694044 env[1583]: 2025-08-13 00:05:04.657 [INFO][6401] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" iface="eth0" netns=""
Aug 13 00:05:04.694044 env[1583]: 2025-08-13 00:05:04.657 [INFO][6401] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a"
Aug 13 00:05:04.694044 env[1583]: 2025-08-13 00:05:04.657 [INFO][6401] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a"
Aug 13 00:05:04.694044 env[1583]: 2025-08-13 00:05:04.676 [INFO][6408] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" HandleID="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0"
Aug 13 00:05:04.694044 env[1583]: 2025-08-13 00:05:04.677 [INFO][6408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:05:04.694044 env[1583]: 2025-08-13 00:05:04.677 [INFO][6408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:05:04.694044 env[1583]: 2025-08-13 00:05:04.688 [WARNING][6408] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" HandleID="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0"
Aug 13 00:05:04.694044 env[1583]: 2025-08-13 00:05:04.688 [INFO][6408] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" HandleID="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0"
Aug 13 00:05:04.694044 env[1583]: 2025-08-13 00:05:04.690 [INFO][6408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:05:04.694044 env[1583]: 2025-08-13 00:05:04.692 [INFO][6401] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a"
Aug 13 00:05:04.694532 env[1583]: time="2025-08-13T00:05:04.694497230Z" level=info msg="TearDown network for sandbox \"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a\" successfully"
Aug 13 00:05:04.694601 env[1583]: time="2025-08-13T00:05:04.694584990Z" level=info msg="StopPodSandbox for \"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a\" returns successfully"
Aug 13 00:05:04.696311 env[1583]: time="2025-08-13T00:05:04.696279994Z" level=info msg="RemovePodSandbox for \"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a\""
Aug 13 00:05:04.696497 env[1583]: time="2025-08-13T00:05:04.696457474Z" level=info msg="Forcibly stopping sandbox \"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a\""
Aug 13 00:05:04.770035 env[1583]: 2025-08-13 00:05:04.731 [WARNING][6423] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0"
Aug 13 00:05:04.770035 env[1583]: 2025-08-13 00:05:04.732 [INFO][6423] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a"
Aug 13 00:05:04.770035 env[1583]: 2025-08-13 00:05:04.732 [INFO][6423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" iface="eth0" netns=""
Aug 13 00:05:04.770035 env[1583]: 2025-08-13 00:05:04.732 [INFO][6423] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a"
Aug 13 00:05:04.770035 env[1583]: 2025-08-13 00:05:04.732 [INFO][6423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a"
Aug 13 00:05:04.770035 env[1583]: 2025-08-13 00:05:04.754 [INFO][6430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" HandleID="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0"
Aug 13 00:05:04.770035 env[1583]: 2025-08-13 00:05:04.754 [INFO][6430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:05:04.770035 env[1583]: 2025-08-13 00:05:04.754 [INFO][6430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:05:04.770035 env[1583]: 2025-08-13 00:05:04.763 [WARNING][6430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" HandleID="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0"
Aug 13 00:05:04.770035 env[1583]: 2025-08-13 00:05:04.763 [INFO][6430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" HandleID="k8s-pod-network.6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--rqg7k-eth0"
Aug 13 00:05:04.770035 env[1583]: 2025-08-13 00:05:04.765 [INFO][6430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:05:04.770035 env[1583]: 2025-08-13 00:05:04.767 [INFO][6423] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a"
Aug 13 00:05:04.770035 env[1583]: time="2025-08-13T00:05:04.768580251Z" level=info msg="TearDown network for sandbox \"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a\" successfully"
Aug 13 00:05:04.778338 env[1583]: time="2025-08-13T00:05:04.778256955Z" level=info msg="RemovePodSandbox \"6a273c8aa262d9953e95a887f4525b24a74f126906bd9b58a946058ad286816a\" returns successfully"
Aug 13 00:05:04.778987 env[1583]: time="2025-08-13T00:05:04.778963836Z" level=info msg="StopPodSandbox for \"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b\""
Aug 13 00:05:04.861550 env[1583]: 2025-08-13 00:05:04.822 [WARNING][6444] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0"
Aug 13 00:05:04.861550 env[1583]: 2025-08-13 00:05:04.822 [INFO][6444] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b"
Aug 13 00:05:04.861550 env[1583]: 2025-08-13 00:05:04.822 [INFO][6444] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" iface="eth0" netns=""
Aug 13 00:05:04.861550 env[1583]: 2025-08-13 00:05:04.822 [INFO][6444] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b"
Aug 13 00:05:04.861550 env[1583]: 2025-08-13 00:05:04.822 [INFO][6444] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b"
Aug 13 00:05:04.861550 env[1583]: 2025-08-13 00:05:04.846 [INFO][6451] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" HandleID="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0"
Aug 13 00:05:04.861550 env[1583]: 2025-08-13 00:05:04.846 [INFO][6451] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:05:04.861550 env[1583]: 2025-08-13 00:05:04.846 [INFO][6451] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:05:04.861550 env[1583]: 2025-08-13 00:05:04.856 [WARNING][6451] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" HandleID="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0"
Aug 13 00:05:04.861550 env[1583]: 2025-08-13 00:05:04.856 [INFO][6451] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" HandleID="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0"
Aug 13 00:05:04.861550 env[1583]: 2025-08-13 00:05:04.858 [INFO][6451] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:05:04.861550 env[1583]: 2025-08-13 00:05:04.859 [INFO][6444] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b"
Aug 13 00:05:04.862027 env[1583]: time="2025-08-13T00:05:04.861583359Z" level=info msg="TearDown network for sandbox \"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b\" successfully"
Aug 13 00:05:04.862027 env[1583]: time="2025-08-13T00:05:04.861614119Z" level=info msg="StopPodSandbox for \"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b\" returns successfully"
Aug 13 00:05:04.862242 env[1583]: time="2025-08-13T00:05:04.862071960Z" level=info msg="RemovePodSandbox for \"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b\""
Aug 13 00:05:04.862242 env[1583]: time="2025-08-13T00:05:04.862106200Z" level=info msg="Forcibly stopping sandbox \"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b\""
Aug 13 00:05:04.948508 env[1583]: 2025-08-13 00:05:04.907 [WARNING][6466] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" WorkloadEndpoint="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0"
Aug 13 00:05:04.948508 env[1583]: 2025-08-13 00:05:04.908 [INFO][6466] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b"
Aug 13 00:05:04.948508 env[1583]: 2025-08-13 00:05:04.908 [INFO][6466] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" iface="eth0" netns=""
Aug 13 00:05:04.948508 env[1583]: 2025-08-13 00:05:04.908 [INFO][6466] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b"
Aug 13 00:05:04.948508 env[1583]: 2025-08-13 00:05:04.908 [INFO][6466] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b"
Aug 13 00:05:04.948508 env[1583]: 2025-08-13 00:05:04.929 [INFO][6473] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" HandleID="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0"
Aug 13 00:05:04.948508 env[1583]: 2025-08-13 00:05:04.930 [INFO][6473] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:05:04.948508 env[1583]: 2025-08-13 00:05:04.930 [INFO][6473] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:05:04.948508 env[1583]: 2025-08-13 00:05:04.943 [WARNING][6473] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" HandleID="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0"
Aug 13 00:05:04.948508 env[1583]: 2025-08-13 00:05:04.943 [INFO][6473] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" HandleID="k8s-pod-network.bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b" Workload="ci--3510.3.8--a--dd293077f6-k8s-calico--apiserver--5f97f8f466--r5z5g-eth0"
Aug 13 00:05:04.948508 env[1583]: 2025-08-13 00:05:04.945 [INFO][6473] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:05:04.948508 env[1583]: 2025-08-13 00:05:04.946 [INFO][6466] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b"
Aug 13 00:05:04.949053 env[1583]: time="2025-08-13T00:05:04.948553891Z" level=info msg="TearDown network for sandbox \"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b\" successfully"
Aug 13 00:05:04.956633 env[1583]: time="2025-08-13T00:05:04.956580671Z" level=info msg="RemovePodSandbox \"bc13898759a7fb9eb62d16cf8ddaf1050c74560f85371f8e93ede8502fa30f8b\" returns successfully"
Aug 13 00:05:06.749303 systemd[1]: run-containerd-runc-k8s.io-154f461bd420d918710d70623b8ac785ac07b094ea66e514e860bd8144da402e-runc.Skro9w.mount: Deactivated successfully.
Aug 13 00:05:25.647323 systemd[1]: Started sshd@7-10.200.20.35:22-10.200.16.10:56528.service.
Aug 13 00:05:25.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.35:22-10.200.16.10:56528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:05:25.653829 kernel: kauditd_printk_skb: 41 callbacks suppressed
Aug 13 00:05:25.653955 kernel: audit: type=1130 audit(1755043525.646:468): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.35:22-10.200.16.10:56528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:05:26.108000 audit[6575]: USER_ACCT pid=6575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:26.108955 sshd[6575]: Accepted publickey for core from 10.200.16.10 port 56528 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:05:26.110916 sshd[6575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:05:26.109000 audit[6575]: CRED_ACQ pid=6575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:26.163690 kernel: audit: type=1101 audit(1755043526.108:469): pid=6575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:26.163804 kernel: audit: type=1103 audit(1755043526.109:470): pid=6575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:26.166254 kernel: audit: type=1006 audit(1755043526.109:471): pid=6575 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1
Aug 13 00:05:26.169366 systemd[1]: Started session-10.scope.
Aug 13 00:05:26.170449 systemd-logind[1566]: New session 10 of user core.
Aug 13 00:05:26.109000 audit[6575]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe07a4520 a2=3 a3=1 items=0 ppid=1 pid=6575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:05:26.207119 kernel: audit: type=1300 audit(1755043526.109:471): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe07a4520 a2=3 a3=1 items=0 ppid=1 pid=6575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:05:26.217460 kernel: audit: type=1327 audit(1755043526.109:471): proctitle=737368643A20636F7265205B707269765D
Aug 13 00:05:26.217539 kernel: audit: type=1105 audit(1755043526.184:472): pid=6575 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:26.109000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:05:26.184000 audit[6575]: USER_START pid=6575 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:26.209000 audit[6578]: CRED_ACQ pid=6578 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:26.270785 kernel: audit: type=1103 audit(1755043526.209:473): pid=6578 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:26.570898 sshd[6575]: pam_unix(sshd:session): session closed for user core
Aug 13 00:05:26.571000 audit[6575]: USER_END pid=6575 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:26.574179 systemd[1]: sshd@7-10.200.20.35:22-10.200.16.10:56528.service: Deactivated successfully.
Aug 13 00:05:26.575095 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 00:05:26.571000 audit[6575]: CRED_DISP pid=6575 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:26.600260 systemd-logind[1566]: Session 10 logged out. Waiting for processes to exit.
Aug 13 00:05:26.623438 kernel: audit: type=1106 audit(1755043526.571:474): pid=6575 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:26.623569 kernel: audit: type=1104 audit(1755043526.571:475): pid=6575 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:26.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.35:22-10.200.16.10:56528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:05:26.624134 systemd-logind[1566]: Removed session 10.
Aug 13 00:05:31.658064 systemd[1]: Started sshd@8-10.200.20.35:22-10.200.16.10:59126.service.
Aug 13 00:05:31.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.35:22-10.200.16.10:59126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:05:31.664605 kernel: kauditd_printk_skb: 1 callbacks suppressed
Aug 13 00:05:31.664717 kernel: audit: type=1130 audit(1755043531.658:477): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.35:22-10.200.16.10:59126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:05:32.144000 audit[6589]: USER_ACCT pid=6589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:32.154890 sshd[6589]: Accepted publickey for core from 10.200.16.10 port 59126 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:05:32.158252 sshd[6589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:05:32.157000 audit[6589]: CRED_ACQ pid=6589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:32.193019 kernel: audit: type=1101 audit(1755043532.144:478): pid=6589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:32.193148 kernel: audit: type=1103 audit(1755043532.157:479): pid=6589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:32.200344 systemd-logind[1566]: New session 11 of user core.
Aug 13 00:05:32.200846 systemd[1]: Started session-11.scope.
Aug 13 00:05:32.215880 kernel: audit: type=1006 audit(1755043532.157:480): pid=6589 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1
Aug 13 00:05:32.157000 audit[6589]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdf4b46e0 a2=3 a3=1 items=0 ppid=1 pid=6589 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:05:32.252522 kernel: audit: type=1300 audit(1755043532.157:480): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdf4b46e0 a2=3 a3=1 items=0 ppid=1 pid=6589 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:05:32.157000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:05:32.265293 kernel: audit: type=1327 audit(1755043532.157:480): proctitle=737368643A20636F7265205B707269765D
Aug 13 00:05:32.222000 audit[6589]: USER_START pid=6589 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:32.298591 kernel: audit: type=1105 audit(1755043532.222:481): pid=6589 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:32.223000 audit[6592]: CRED_ACQ pid=6592 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:32.324290 kernel: audit: type=1103 audit(1755043532.223:482): pid=6592 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:32.598747 sshd[6589]: pam_unix(sshd:session): session closed for user core
Aug 13 00:05:32.599000 audit[6589]: USER_END pid=6589 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:32.630079 systemd[1]: sshd@8-10.200.20.35:22-10.200.16.10:59126.service: Deactivated successfully.
Aug 13 00:05:32.599000 audit[6589]: CRED_DISP pid=6589 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:32.653403 kernel: audit: type=1106 audit(1755043532.599:483): pid=6589 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:32.653550 kernel: audit: type=1104 audit(1755043532.599:484): pid=6589 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:32.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.35:22-10.200.16.10:59126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:05:32.653990 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 00:05:32.654108 systemd-logind[1566]: Session 11 logged out. Waiting for processes to exit.
Aug 13 00:05:32.655272 systemd-logind[1566]: Removed session 11.
Aug 13 00:05:37.676534 systemd[1]: Started sshd@9-10.200.20.35:22-10.200.16.10:59138.service.
Aug 13 00:05:37.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.35:22-10.200.16.10:59138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:05:37.683040 kernel: kauditd_printk_skb: 1 callbacks suppressed
Aug 13 00:05:37.683146 kernel: audit: type=1130 audit(1755043537.676:486): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.35:22-10.200.16.10:59138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:05:38.159000 audit[6643]: USER_ACCT pid=6643 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:38.160488 sshd[6643]: Accepted publickey for core from 10.200.16.10 port 59138 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:05:38.164284 sshd[6643]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:05:38.163000 audit[6643]: CRED_ACQ pid=6643 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:38.208390 kernel: audit: type=1101 audit(1755043538.159:487): pid=6643 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:38.208521 kernel: audit: type=1103 audit(1755043538.163:488): pid=6643 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:38.212266 systemd[1]: Started session-12.scope.
Aug 13 00:05:38.213380 systemd-logind[1566]: New session 12 of user core.
Aug 13 00:05:38.224627 kernel: audit: type=1006 audit(1755043538.163:489): pid=6643 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1
Aug 13 00:05:38.224858 kernel: audit: type=1300 audit(1755043538.163:489): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc6c27c20 a2=3 a3=1 items=0 ppid=1 pid=6643 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:05:38.163000 audit[6643]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc6c27c20 a2=3 a3=1 items=0 ppid=1 pid=6643 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:05:38.163000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:05:38.258095 kernel: audit: type=1327 audit(1755043538.163:489): proctitle=737368643A20636F7265205B707269765D
Aug 13 00:05:38.223000 audit[6643]: USER_START pid=6643 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:38.285223 kernel: audit: type=1105 audit(1755043538.223:490): pid=6643 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:38.286152 kernel: audit: type=1103 audit(1755043538.251:491): pid=6646 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:38.251000 audit[6646]: CRED_ACQ pid=6646 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:38.624888 sshd[6643]: pam_unix(sshd:session): session closed for user core
Aug 13 00:05:38.625000 audit[6643]: USER_END pid=6643 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:38.628151 systemd-logind[1566]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:05:38.629649 systemd[1]: sshd@9-10.200.20.35:22-10.200.16.10:59138.service: Deactivated successfully.
Aug 13 00:05:38.630565 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:05:38.632142 systemd-logind[1566]: Removed session 12.
Aug 13 00:05:38.625000 audit[6643]: CRED_DISP pid=6643 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:38.676259 kernel: audit: type=1106 audit(1755043538.625:492): pid=6643 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:38.676395 kernel: audit: type=1104 audit(1755043538.625:493): pid=6643 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:38.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.35:22-10.200.16.10:59138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:05:38.718043 systemd[1]: Started sshd@10-10.200.20.35:22-10.200.16.10:59140.service.
Aug 13 00:05:38.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.35:22-10.200.16.10:59140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:05:39.202192 sshd[6657]: Accepted publickey for core from 10.200.16.10 port 59140 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:05:39.201000 audit[6657]: USER_ACCT pid=6657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:39.204000 audit[6657]: CRED_ACQ pid=6657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:05:39.204000 audit[6657]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd52e4210 a2=3 a3=1 items=0 ppid=1 pid=6657 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:05:39.204000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:05:39.205884 sshd[6657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:05:39.210983 systemd-logind[1566]: New session 13 of user core.
Aug 13 00:05:39.211285 systemd[1]: Started session-13.scope.
Aug 13 00:05:39.215000 audit[6657]: USER_START pid=6657 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:39.217000 audit[6660]: CRED_ACQ pid=6660 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:39.668374 sshd[6657]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:39.668000 audit[6657]: USER_END pid=6657 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:39.669000 audit[6657]: CRED_DISP pid=6657 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:39.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.35:22-10.200.16.10:59140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:05:39.671534 systemd[1]: sshd@10-10.200.20.35:22-10.200.16.10:59140.service: Deactivated successfully. Aug 13 00:05:39.673070 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:05:39.673557 systemd-logind[1566]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:05:39.674447 systemd-logind[1566]: Removed session 13. 
Aug 13 00:05:39.747787 systemd[1]: Started sshd@11-10.200.20.35:22-10.200.16.10:59152.service. Aug 13 00:05:39.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.35:22-10.200.16.10:59152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:05:40.232000 audit[6670]: USER_ACCT pid=6670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:40.233353 sshd[6670]: Accepted publickey for core from 10.200.16.10 port 59152 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:05:40.233000 audit[6670]: CRED_ACQ pid=6670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:40.234000 audit[6670]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc60e47f0 a2=3 a3=1 items=0 ppid=1 pid=6670 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:05:40.234000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:05:40.234979 sshd[6670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:40.238710 systemd-logind[1566]: New session 14 of user core. Aug 13 00:05:40.239424 systemd[1]: Started session-14.scope. 
Aug 13 00:05:40.243000 audit[6670]: USER_START pid=6670 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:40.244000 audit[6673]: CRED_ACQ pid=6673 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:40.427420 systemd[1]: run-containerd-runc-k8s.io-c737d97130b318ae81de8ac79f780f355b3ab9c9c6150d75fecb1770449ecc14-runc.LmD3Fe.mount: Deactivated successfully. Aug 13 00:05:40.661502 sshd[6670]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:40.662000 audit[6670]: USER_END pid=6670 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:40.662000 audit[6670]: CRED_DISP pid=6670 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:40.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.35:22-10.200.16.10:59152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:05:40.664823 systemd-logind[1566]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:05:40.664955 systemd[1]: sshd@11-10.200.20.35:22-10.200.16.10:59152.service: Deactivated successfully. 
Aug 13 00:05:40.665826 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:05:40.666271 systemd-logind[1566]: Removed session 14. Aug 13 00:05:45.740031 kernel: kauditd_printk_skb: 23 callbacks suppressed Aug 13 00:05:45.740172 kernel: audit: type=1130 audit(1755043545.733:513): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.35:22-10.200.16.10:37934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:05:45.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.35:22-10.200.16.10:37934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:05:45.734008 systemd[1]: Started sshd@12-10.200.20.35:22-10.200.16.10:37934.service. Aug 13 00:05:46.207000 audit[6707]: USER_ACCT pid=6707 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:46.216362 sshd[6707]: Accepted publickey for core from 10.200.16.10 port 37934 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:05:46.218476 sshd[6707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:46.213000 audit[6707]: CRED_ACQ pid=6707 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:46.255693 kernel: audit: type=1101 audit(1755043546.207:514): pid=6707 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:46.255816 kernel: audit: type=1103 audit(1755043546.213:515): pid=6707 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:46.270932 kernel: audit: type=1006 audit(1755043546.213:516): pid=6707 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Aug 13 00:05:46.259400 systemd-logind[1566]: New session 15 of user core. Aug 13 00:05:46.259990 systemd[1]: Started session-15.scope. Aug 13 00:05:46.213000 audit[6707]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc06b5ab0 a2=3 a3=1 items=0 ppid=1 pid=6707 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:05:46.295963 kernel: audit: type=1300 audit(1755043546.213:516): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc06b5ab0 a2=3 a3=1 items=0 ppid=1 pid=6707 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:05:46.213000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:05:46.304318 kernel: audit: type=1327 audit(1755043546.213:516): proctitle=737368643A20636F7265205B707269765D Aug 13 00:05:46.262000 audit[6707]: USER_START pid=6707 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:46.330866 kernel: audit: type=1105 audit(1755043546.262:517): pid=6707 uid=0 auid=500 
ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:46.262000 audit[6709]: CRED_ACQ pid=6709 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:46.352795 kernel: audit: type=1103 audit(1755043546.262:518): pid=6709 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:46.635552 sshd[6707]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:46.635000 audit[6707]: USER_END pid=6707 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:46.638649 systemd-logind[1566]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:05:46.640037 systemd[1]: sshd@12-10.200.20.35:22-10.200.16.10:37934.service: Deactivated successfully. Aug 13 00:05:46.640953 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:05:46.642432 systemd-logind[1566]: Removed session 15. 
Aug 13 00:05:46.636000 audit[6707]: CRED_DISP pid=6707 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:46.686025 kernel: audit: type=1106 audit(1755043546.635:519): pid=6707 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:46.686147 kernel: audit: type=1104 audit(1755043546.636:520): pid=6707 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:46.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.35:22-10.200.16.10:37934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:05:51.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.35:22-10.200.16.10:38702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:05:51.711131 systemd[1]: Started sshd@13-10.200.20.35:22-10.200.16.10:38702.service. Aug 13 00:05:51.716741 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:05:51.716858 kernel: audit: type=1130 audit(1755043551.710:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.35:22-10.200.16.10:38702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:05:52.177000 audit[6720]: USER_ACCT pid=6720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:52.178363 sshd[6720]: Accepted publickey for core from 10.200.16.10 port 38702 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:05:52.203703 kernel: audit: type=1101 audit(1755043552.177:523): pid=6720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:52.202000 audit[6720]: CRED_ACQ pid=6720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:52.204092 sshd[6720]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:52.241025 kernel: audit: type=1103 audit(1755043552.202:524): pid=6720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:52.241113 kernel: audit: type=1006 audit(1755043552.203:525): pid=6720 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Aug 13 00:05:52.203000 audit[6720]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdce74d90 a2=3 a3=1 items=0 ppid=1 pid=6720 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Aug 13 00:05:52.244449 systemd[1]: Started session-16.scope. Aug 13 00:05:52.245394 systemd-logind[1566]: New session 16 of user core. Aug 13 00:05:52.203000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:05:52.274466 kernel: audit: type=1300 audit(1755043552.203:525): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdce74d90 a2=3 a3=1 items=0 ppid=1 pid=6720 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:05:52.274556 kernel: audit: type=1327 audit(1755043552.203:525): proctitle=737368643A20636F7265205B707269765D Aug 13 00:05:52.249000 audit[6720]: USER_START pid=6720 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:52.251000 audit[6722]: CRED_ACQ pid=6722 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:52.322774 kernel: audit: type=1105 audit(1755043552.249:526): pid=6720 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:52.322867 kernel: audit: type=1103 audit(1755043552.251:527): pid=6722 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:52.629567 sshd[6720]: 
pam_unix(sshd:session): session closed for user core Aug 13 00:05:52.630000 audit[6720]: USER_END pid=6720 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:52.632815 systemd-logind[1566]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:05:52.634147 systemd[1]: sshd@13-10.200.20.35:22-10.200.16.10:38702.service: Deactivated successfully. Aug 13 00:05:52.635017 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:05:52.636434 systemd-logind[1566]: Removed session 16. Aug 13 00:05:52.630000 audit[6720]: CRED_DISP pid=6720 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:52.679550 kernel: audit: type=1106 audit(1755043552.630:528): pid=6720 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:52.679717 kernel: audit: type=1104 audit(1755043552.630:529): pid=6720 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:52.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.35:22-10.200.16.10:38702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:05:57.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.35:22-10.200.16.10:38716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:05:57.706257 systemd[1]: Started sshd@14-10.200.20.35:22-10.200.16.10:38716.service. Aug 13 00:05:57.711695 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:05:57.711807 kernel: audit: type=1130 audit(1755043557.705:531): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.35:22-10.200.16.10:38716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:05:58.177000 audit[6753]: USER_ACCT pid=6753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:58.178439 sshd[6753]: Accepted publickey for core from 10.200.16.10 port 38716 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:05:58.203628 sshd[6753]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:58.202000 audit[6753]: CRED_ACQ pid=6753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:58.215678 systemd[1]: Started session-17.scope. 
Aug 13 00:05:58.237320 kernel: audit: type=1101 audit(1755043558.177:532): pid=6753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:58.237479 kernel: audit: type=1103 audit(1755043558.202:533): pid=6753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:58.237439 systemd-logind[1566]: New session 17 of user core. Aug 13 00:05:58.262411 kernel: audit: type=1006 audit(1755043558.202:534): pid=6753 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Aug 13 00:05:58.202000 audit[6753]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff586ba40 a2=3 a3=1 items=0 ppid=1 pid=6753 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:05:58.293662 kernel: audit: type=1300 audit(1755043558.202:534): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff586ba40 a2=3 a3=1 items=0 ppid=1 pid=6753 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:05:58.202000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:05:58.308937 kernel: audit: type=1327 audit(1755043558.202:534): proctitle=737368643A20636F7265205B707269765D Aug 13 00:05:58.242000 audit[6753]: USER_START pid=6753 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:58.335831 kernel: audit: type=1105 audit(1755043558.242:535): pid=6753 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:58.243000 audit[6756]: CRED_ACQ pid=6756 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:58.358105 kernel: audit: type=1103 audit(1755043558.243:536): pid=6756 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:58.717175 sshd[6753]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:58.717000 audit[6753]: USER_END pid=6753 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:58.720000 systemd[1]: sshd@14-10.200.20.35:22-10.200.16.10:38716.service: Deactivated successfully. Aug 13 00:05:58.720930 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:05:58.746670 systemd-logind[1566]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:05:58.747523 systemd-logind[1566]: Removed session 17. 
Aug 13 00:05:58.717000 audit[6753]: CRED_DISP pid=6753 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:58.794990 kernel: audit: type=1106 audit(1755043558.717:537): pid=6753 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:58.795134 kernel: audit: type=1104 audit(1755043558.717:538): pid=6753 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:05:58.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.35:22-10.200.16.10:38716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:06:03.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.35:22-10.200.16.10:42712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:06:03.794316 systemd[1]: Started sshd@15-10.200.20.35:22-10.200.16.10:42712.service. Aug 13 00:06:03.799517 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:06:03.799606 kernel: audit: type=1130 audit(1755043563.793:540): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.35:22-10.200.16.10:42712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:06:04.199718 systemd[1]: run-containerd-runc-k8s.io-f8e39e7b31e6506e0cdfb6ac5a659be058ceeb9f650275f824d88a249493d425-runc.W2eV92.mount: Deactivated successfully. Aug 13 00:06:04.263000 audit[6770]: USER_ACCT pid=6770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:04.264023 sshd[6770]: Accepted publickey for core from 10.200.16.10 port 42712 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:06:04.289684 kernel: audit: type=1101 audit(1755043564.263:541): pid=6770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:04.289780 kernel: audit: type=1103 audit(1755043564.287:542): pid=6770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:04.287000 audit[6770]: CRED_ACQ pid=6770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:04.289033 sshd[6770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:06:04.311850 kernel: audit: type=1006 audit(1755043564.288:543): pid=6770 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Aug 13 00:06:04.315039 systemd[1]: Started session-18.scope. 
Aug 13 00:06:04.315991 systemd-logind[1566]: New session 18 of user core. Aug 13 00:06:04.288000 audit[6770]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff3a008c0 a2=3 a3=1 items=0 ppid=1 pid=6770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:06:04.351051 kernel: audit: type=1300 audit(1755043564.288:543): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff3a008c0 a2=3 a3=1 items=0 ppid=1 pid=6770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:06:04.288000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:06:04.359440 kernel: audit: type=1327 audit(1755043564.288:543): proctitle=737368643A20636F7265205B707269765D Aug 13 00:06:04.359553 kernel: audit: type=1105 audit(1755043564.320:544): pid=6770 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:04.320000 audit[6770]: USER_START pid=6770 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:04.326000 audit[6794]: CRED_ACQ pid=6794 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:04.407501 kernel: audit: type=1103 audit(1755043564.326:545): pid=6794 uid=0 auid=500 
ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:04.696053 sshd[6770]: pam_unix(sshd:session): session closed for user core Aug 13 00:06:04.696000 audit[6770]: USER_END pid=6770 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:04.699255 systemd-logind[1566]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:06:04.700690 systemd[1]: sshd@15-10.200.20.35:22-10.200.16.10:42712.service: Deactivated successfully. Aug 13 00:06:04.701557 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:06:04.703000 systemd-logind[1566]: Removed session 18. Aug 13 00:06:04.696000 audit[6770]: CRED_DISP pid=6770 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:04.745713 kernel: audit: type=1106 audit(1755043564.696:546): pid=6770 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:04.745823 kernel: audit: type=1104 audit(1755043564.696:547): pid=6770 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:04.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.35:22-10.200.16.10:42712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:06:04.775844 systemd[1]: Started sshd@16-10.200.20.35:22-10.200.16.10:42724.service. Aug 13 00:06:04.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.35:22-10.200.16.10:42724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:06:05.259000 audit[6803]: USER_ACCT pid=6803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:05.260940 sshd[6803]: Accepted publickey for core from 10.200.16.10 port 42724 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:06:05.261000 audit[6803]: CRED_ACQ pid=6803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:05.261000 audit[6803]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd3a16f40 a2=3 a3=1 items=0 ppid=1 pid=6803 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:06:05.261000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:06:05.264106 sshd[6803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:06:05.269301 systemd[1]: Started session-19.scope. Aug 13 00:06:05.270588 systemd-logind[1566]: New session 19 of user core. 
Aug 13 00:06:05.276000 audit[6803]: USER_START pid=6803 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:05.277000 audit[6806]: CRED_ACQ pid=6806 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:05.843254 sshd[6803]: pam_unix(sshd:session): session closed for user core Aug 13 00:06:05.843000 audit[6803]: USER_END pid=6803 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:05.844000 audit[6803]: CRED_DISP pid=6803 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:05.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.35:22-10.200.16.10:42724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:06:05.846327 systemd[1]: sshd@16-10.200.20.35:22-10.200.16.10:42724.service: Deactivated successfully. Aug 13 00:06:05.847731 systemd-logind[1566]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:06:05.847766 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:06:05.848928 systemd-logind[1566]: Removed session 19. 
Aug 13 00:06:05.922705 systemd[1]: Started sshd@17-10.200.20.35:22-10.200.16.10:42730.service. Aug 13 00:06:05.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.35:22-10.200.16.10:42730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:06:06.125118 systemd[1]: run-containerd-runc-k8s.io-f8e39e7b31e6506e0cdfb6ac5a659be058ceeb9f650275f824d88a249493d425-runc.jLUp7D.mount: Deactivated successfully. Aug 13 00:06:06.411000 audit[6814]: USER_ACCT pid=6814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:06.412221 sshd[6814]: Accepted publickey for core from 10.200.16.10 port 42730 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:06:06.413000 audit[6814]: CRED_ACQ pid=6814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:06.413000 audit[6814]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc9d5d1f0 a2=3 a3=1 items=0 ppid=1 pid=6814 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:06:06.413000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:06:06.414057 sshd[6814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:06:06.418079 systemd-logind[1566]: New session 20 of user core. Aug 13 00:06:06.418536 systemd[1]: Started session-20.scope. 
Aug 13 00:06:06.423000 audit[6814]: USER_START pid=6814 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:06.424000 audit[6837]: CRED_ACQ pid=6837 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:08.341000 audit[6870]: NETFILTER_CFG table=filter:158 family=2 entries=20 op=nft_register_rule pid=6870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:06:08.341000 audit[6870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffe3882c60 a2=0 a3=1 items=0 ppid=2755 pid=6870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:06:08.341000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:06:08.349000 audit[6870]: NETFILTER_CFG table=nat:159 family=2 entries=26 op=nft_register_rule pid=6870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:06:08.349000 audit[6870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffe3882c60 a2=0 a3=1 items=0 ppid=2755 pid=6870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:06:08.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:06:08.374000 
audit[6872]: NETFILTER_CFG table=filter:160 family=2 entries=32 op=nft_register_rule pid=6872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:06:08.374000 audit[6872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffc437d5c0 a2=0 a3=1 items=0 ppid=2755 pid=6872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:06:08.374000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:06:08.379000 audit[6872]: NETFILTER_CFG table=nat:161 family=2 entries=26 op=nft_register_rule pid=6872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:06:08.379000 audit[6872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffc437d5c0 a2=0 a3=1 items=0 ppid=2755 pid=6872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:06:08.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:06:08.427004 sshd[6814]: pam_unix(sshd:session): session closed for user core Aug 13 00:06:08.428000 audit[6814]: USER_END pid=6814 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:08.428000 audit[6814]: CRED_DISP pid=6814 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 
addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:08.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.35:22-10.200.16.10:42730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:06:08.432424 systemd-logind[1566]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:06:08.432569 systemd[1]: sshd@17-10.200.20.35:22-10.200.16.10:42730.service: Deactivated successfully. Aug 13 00:06:08.433467 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:06:08.433932 systemd-logind[1566]: Removed session 20. Aug 13 00:06:08.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.35:22-10.200.16.10:42736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:06:08.506428 systemd[1]: Started sshd@18-10.200.20.35:22-10.200.16.10:42736.service. 
Aug 13 00:06:08.993000 audit[6875]: USER_ACCT pid=6875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:08.996087 sshd[6875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:06:08.996797 sshd[6875]: Accepted publickey for core from 10.200.16.10 port 42736 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:06:08.999896 kernel: kauditd_printk_skb: 36 callbacks suppressed Aug 13 00:06:08.999997 kernel: audit: type=1101 audit(1755043568.993:572): pid=6875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:08.995000 audit[6875]: CRED_ACQ pid=6875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:09.048824 kernel: audit: type=1103 audit(1755043568.995:573): pid=6875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:09.063100 kernel: audit: type=1006 audit(1755043568.995:574): pid=6875 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Aug 13 00:06:08.995000 audit[6875]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffda4c3020 a2=3 a3=1 items=0 ppid=1 pid=6875 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 
comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:06:09.087864 kernel: audit: type=1300 audit(1755043568.995:574): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffda4c3020 a2=3 a3=1 items=0 ppid=1 pid=6875 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:06:08.995000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:06:09.090335 systemd[1]: Started session-21.scope. Aug 13 00:06:09.096312 kernel: audit: type=1327 audit(1755043568.995:574): proctitle=737368643A20636F7265205B707269765D Aug 13 00:06:09.096448 systemd-logind[1566]: New session 21 of user core. Aug 13 00:06:09.100000 audit[6875]: USER_START pid=6875 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:09.130254 kernel: audit: type=1105 audit(1755043569.100:575): pid=6875 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:09.130353 kernel: audit: type=1103 audit(1755043569.129:576): pid=6878 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:09.129000 audit[6878]: CRED_ACQ pid=6878 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 
addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:09.612051 sshd[6875]: pam_unix(sshd:session): session closed for user core Aug 13 00:06:09.612000 audit[6875]: USER_END pid=6875 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:09.616573 systemd[1]: sshd@18-10.200.20.35:22-10.200.16.10:42736.service: Deactivated successfully. Aug 13 00:06:09.617520 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:06:09.641735 systemd-logind[1566]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:06:09.612000 audit[6875]: CRED_DISP pid=6875 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:09.664299 kernel: audit: type=1106 audit(1755043569.612:577): pid=6875 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:09.664434 kernel: audit: type=1104 audit(1755043569.612:578): pid=6875 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:09.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.35:22-10.200.16.10:42736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:06:09.686392 kernel: audit: type=1131 audit(1755043569.612:579): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.35:22-10.200.16.10:42736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:06:09.686711 systemd-logind[1566]: Removed session 21. Aug 13 00:06:09.689325 systemd[1]: Started sshd@19-10.200.20.35:22-10.200.16.10:42752.service. Aug 13 00:06:09.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.35:22-10.200.16.10:42752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:06:10.158000 audit[6888]: USER_ACCT pid=6888 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:10.159574 sshd[6888]: Accepted publickey for core from 10.200.16.10 port 42752 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:06:10.159000 audit[6888]: CRED_ACQ pid=6888 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:10.160000 audit[6888]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffccdf21c0 a2=3 a3=1 items=0 ppid=1 pid=6888 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:06:10.160000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:06:10.161181 sshd[6888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:06:10.165377 
systemd-logind[1566]: New session 22 of user core. Aug 13 00:06:10.165886 systemd[1]: Started session-22.scope. Aug 13 00:06:10.169000 audit[6888]: USER_START pid=6888 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:10.171000 audit[6891]: CRED_ACQ pid=6891 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:10.620456 sshd[6888]: pam_unix(sshd:session): session closed for user core Aug 13 00:06:10.620000 audit[6888]: USER_END pid=6888 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:10.621000 audit[6888]: CRED_DISP pid=6888 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:10.623788 systemd-logind[1566]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:06:10.623930 systemd[1]: sshd@19-10.200.20.35:22-10.200.16.10:42752.service: Deactivated successfully. Aug 13 00:06:10.624864 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:06:10.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.35:22-10.200.16.10:42752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:06:10.625356 systemd-logind[1566]: Removed session 22. Aug 13 00:06:15.366000 audit[6921]: NETFILTER_CFG table=filter:162 family=2 entries=20 op=nft_register_rule pid=6921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:06:15.371908 kernel: kauditd_printk_skb: 11 callbacks suppressed Aug 13 00:06:15.372010 kernel: audit: type=1325 audit(1755043575.366:589): table=filter:162 family=2 entries=20 op=nft_register_rule pid=6921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:06:15.366000 audit[6921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffcafd1e50 a2=0 a3=1 items=0 ppid=2755 pid=6921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:06:15.412988 kernel: audit: type=1300 audit(1755043575.366:589): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffcafd1e50 a2=0 a3=1 items=0 ppid=2755 pid=6921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:06:15.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:06:15.427010 kernel: audit: type=1327 audit(1755043575.366:589): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:06:15.429000 audit[6921]: NETFILTER_CFG table=nat:163 family=2 entries=110 op=nft_register_chain pid=6921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:06:15.429000 audit[6921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffcafd1e50 a2=0 a3=1 items=0 ppid=2755 pid=6921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:06:15.470410 kernel: audit: type=1325 audit(1755043575.429:590): table=nat:163 family=2 entries=110 op=nft_register_chain pid=6921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:06:15.470530 kernel: audit: type=1300 audit(1755043575.429:590): arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffcafd1e50 a2=0 a3=1 items=0 ppid=2755 pid=6921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:06:15.470562 kernel: audit: type=1327 audit(1755043575.429:590): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:06:15.429000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:06:15.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.35:22-10.200.16.10:46434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:06:15.699464 systemd[1]: Started sshd@20-10.200.20.35:22-10.200.16.10:46434.service. Aug 13 00:06:15.723697 kernel: audit: type=1130 audit(1755043575.698:591): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.35:22-10.200.16.10:46434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:06:16.184000 audit[6923]: USER_ACCT pid=6923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:16.185378 sshd[6923]: Accepted publickey for core from 10.200.16.10 port 46434 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:06:16.211792 kernel: audit: type=1101 audit(1755043576.184:592): pid=6923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:16.211000 audit[6923]: CRED_ACQ pid=6923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:16.213112 sshd[6923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:06:16.250688 kernel: audit: type=1103 audit(1755043576.211:593): pid=6923 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:16.250829 kernel: audit: type=1006 audit(1755043576.212:594): pid=6923 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Aug 13 00:06:16.212000 audit[6923]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffde0d9f80 a2=3 a3=1 items=0 ppid=1 pid=6923 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Aug 13 00:06:16.212000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:06:16.253315 systemd-logind[1566]: New session 23 of user core. Aug 13 00:06:16.254460 systemd[1]: Started session-23.scope. Aug 13 00:06:16.258000 audit[6923]: USER_START pid=6923 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:16.260000 audit[6926]: CRED_ACQ pid=6926 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:16.647923 sshd[6923]: pam_unix(sshd:session): session closed for user core Aug 13 00:06:16.648000 audit[6923]: USER_END pid=6923 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:16.648000 audit[6923]: CRED_DISP pid=6923 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:16.651380 systemd[1]: sshd@20-10.200.20.35:22-10.200.16.10:46434.service: Deactivated successfully. Aug 13 00:06:16.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.35:22-10.200.16.10:46434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:06:16.652645 systemd[1]: session-23.scope: Deactivated successfully. 
Aug 13 00:06:16.653037 systemd-logind[1566]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:06:16.653890 systemd-logind[1566]: Removed session 23. Aug 13 00:06:21.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.35:22-10.200.16.10:56560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:06:21.726295 systemd[1]: Started sshd@21-10.200.20.35:22-10.200.16.10:56560.service. Aug 13 00:06:21.731508 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 00:06:21.731586 kernel: audit: type=1130 audit(1755043581.725:600): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.35:22-10.200.16.10:56560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:06:22.225000 audit[6937]: USER_ACCT pid=6937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:22.227494 sshd[6937]: Accepted publickey for core from 10.200.16.10 port 56560 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:06:22.252241 sshd[6937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:06:22.251000 audit[6937]: CRED_ACQ pid=6937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Aug 13 00:06:22.276019 kernel: audit: type=1101 audit(1755043582.225:601): pid=6937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" 
exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:22.276181 kernel: audit: type=1103 audit(1755043582.251:602): pid=6937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:22.291223 kernel: audit: type=1006 audit(1755043582.251:603): pid=6937 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Aug 13 00:06:22.251000 audit[6937]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe351b840 a2=3 a3=1 items=0 ppid=1 pid=6937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:06:22.296743 systemd-logind[1566]: New session 24 of user core.
Aug 13 00:06:22.297988 systemd[1]: Started session-24.scope.
Aug 13 00:06:22.317237 kernel: audit: type=1300 audit(1755043582.251:603): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe351b840 a2=3 a3=1 items=0 ppid=1 pid=6937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:06:22.251000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:06:22.326623 kernel: audit: type=1327 audit(1755043582.251:603): proctitle=737368643A20636F7265205B707269765D
Aug 13 00:06:22.326000 audit[6937]: USER_START pid=6937 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:22.328000 audit[6940]: CRED_ACQ pid=6940 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:22.384416 kernel: audit: type=1105 audit(1755043582.326:604): pid=6937 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:22.384566 kernel: audit: type=1103 audit(1755043582.328:605): pid=6940 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:22.662332 sshd[6937]: pam_unix(sshd:session): session closed for user core
Aug 13 00:06:22.662000 audit[6937]: USER_END pid=6937 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:22.695445 systemd[1]: sshd@21-10.200.20.35:22-10.200.16.10:56560.service: Deactivated successfully.
Aug 13 00:06:22.696286 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:06:22.662000 audit[6937]: CRED_DISP pid=6937 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:22.723009 kernel: audit: type=1106 audit(1755043582.662:606): pid=6937 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:22.723132 kernel: audit: type=1104 audit(1755043582.662:607): pid=6937 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:22.697469 systemd-logind[1566]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:06:22.698334 systemd-logind[1566]: Removed session 24.
Aug 13 00:06:22.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.35:22-10.200.16.10:56560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:06:27.769285 kernel: kauditd_printk_skb: 1 callbacks suppressed
Aug 13 00:06:27.769440 kernel: audit: type=1130 audit(1755043587.741:609): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.35:22-10.200.16.10:56574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:06:27.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.35:22-10.200.16.10:56574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:06:27.742181 systemd[1]: Started sshd@22-10.200.20.35:22-10.200.16.10:56574.service.
Aug 13 00:06:28.226000 audit[6957]: USER_ACCT pid=6957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:28.228853 sshd[6957]: Accepted publickey for core from 10.200.16.10 port 56574 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:06:28.236441 sshd[6957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:06:28.235000 audit[6957]: CRED_ACQ pid=6957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:28.274013 kernel: audit: type=1101 audit(1755043588.226:610): pid=6957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:28.274147 kernel: audit: type=1103 audit(1755043588.235:611): pid=6957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:28.289695 kernel: audit: type=1006 audit(1755043588.235:612): pid=6957 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Aug 13 00:06:28.235000 audit[6957]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe35d9d70 a2=3 a3=1 items=0 ppid=1 pid=6957 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:06:28.294367 systemd[1]: Started session-25.scope.
Aug 13 00:06:28.295446 systemd-logind[1566]: New session 25 of user core.
Aug 13 00:06:28.315129 kernel: audit: type=1300 audit(1755043588.235:612): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe35d9d70 a2=3 a3=1 items=0 ppid=1 pid=6957 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:06:28.235000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:06:28.323804 kernel: audit: type=1327 audit(1755043588.235:612): proctitle=737368643A20636F7265205B707269765D
Aug 13 00:06:28.324000 audit[6957]: USER_START pid=6957 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:28.324000 audit[6963]: CRED_ACQ pid=6963 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:28.377034 kernel: audit: type=1105 audit(1755043588.324:613): pid=6957 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:28.377151 kernel: audit: type=1103 audit(1755043588.324:614): pid=6963 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:28.670752 sshd[6957]: pam_unix(sshd:session): session closed for user core
Aug 13 00:06:28.671000 audit[6957]: USER_END pid=6957 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:28.676000 audit[6957]: CRED_DISP pid=6957 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:28.700137 systemd[1]: sshd@22-10.200.20.35:22-10.200.16.10:56574.service: Deactivated successfully.
Aug 13 00:06:28.723222 kernel: audit: type=1106 audit(1755043588.671:615): pid=6957 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:28.723383 kernel: audit: type=1104 audit(1755043588.676:616): pid=6957 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:28.723859 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 00:06:28.723903 systemd-logind[1566]: Session 25 logged out. Waiting for processes to exit.
Aug 13 00:06:28.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.35:22-10.200.16.10:56574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:06:28.725004 systemd-logind[1566]: Removed session 25.
Aug 13 00:06:33.777826 kernel: kauditd_printk_skb: 1 callbacks suppressed
Aug 13 00:06:33.777970 kernel: audit: type=1130 audit(1755043593.750:618): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.35:22-10.200.16.10:35690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:06:33.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.35:22-10.200.16.10:35690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:06:33.751226 systemd[1]: Started sshd@23-10.200.20.35:22-10.200.16.10:35690.service.
Aug 13 00:06:34.235000 audit[6973]: USER_ACCT pid=6973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:34.236180 sshd[6973]: Accepted publickey for core from 10.200.16.10 port 35690 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:06:34.238354 sshd[6973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:06:34.237000 audit[6973]: CRED_ACQ pid=6973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:34.282401 kernel: audit: type=1101 audit(1755043594.235:619): pid=6973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:34.282598 kernel: audit: type=1103 audit(1755043594.237:620): pid=6973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:34.296549 kernel: audit: type=1006 audit(1755043594.237:621): pid=6973 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Aug 13 00:06:34.237000 audit[6973]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc52472c0 a2=3 a3=1 items=0 ppid=1 pid=6973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:06:34.299942 systemd[1]: Started session-26.scope.
Aug 13 00:06:34.300618 systemd-logind[1566]: New session 26 of user core.
Aug 13 00:06:34.322009 kernel: audit: type=1300 audit(1755043594.237:621): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc52472c0 a2=3 a3=1 items=0 ppid=1 pid=6973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:06:34.237000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:06:34.330584 kernel: audit: type=1327 audit(1755043594.237:621): proctitle=737368643A20636F7265205B707269765D
Aug 13 00:06:34.330000 audit[6973]: USER_START pid=6973 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:34.357162 kernel: audit: type=1105 audit(1755043594.330:622): pid=6973 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:34.357280 kernel: audit: type=1103 audit(1755043594.332:623): pid=6976 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:34.332000 audit[6976]: CRED_ACQ pid=6976 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:34.695837 sshd[6973]: pam_unix(sshd:session): session closed for user core
Aug 13 00:06:34.696000 audit[6973]: USER_END pid=6973 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:34.699320 systemd-logind[1566]: Session 26 logged out. Waiting for processes to exit.
Aug 13 00:06:34.701021 systemd[1]: sshd@23-10.200.20.35:22-10.200.16.10:35690.service: Deactivated successfully.
Aug 13 00:06:34.702091 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 00:06:34.703774 systemd-logind[1566]: Removed session 26.
Aug 13 00:06:34.696000 audit[6973]: CRED_DISP pid=6973 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:34.746482 kernel: audit: type=1106 audit(1755043594.696:624): pid=6973 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:34.746628 kernel: audit: type=1104 audit(1755043594.696:625): pid=6973 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:34.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.35:22-10.200.16.10:35690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:06:39.778087 systemd[1]: Started sshd@24-10.200.20.35:22-10.200.16.10:35694.service.
Aug 13 00:06:39.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.35:22-10.200.16.10:35694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:06:39.785810 kernel: kauditd_printk_skb: 1 callbacks suppressed
Aug 13 00:06:39.785880 kernel: audit: type=1130 audit(1755043599.778:627): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.35:22-10.200.16.10:35694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:06:40.268000 audit[7028]: USER_ACCT pid=7028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:40.269322 sshd[7028]: Accepted publickey for core from 10.200.16.10 port 35694 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:06:40.295716 kernel: audit: type=1101 audit(1755043600.268:628): pid=7028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:40.295000 audit[7028]: CRED_ACQ pid=7028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:40.296404 sshd[7028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:06:40.334768 kernel: audit: type=1103 audit(1755043600.295:629): pid=7028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:40.334948 kernel: audit: type=1006 audit(1755043600.295:630): pid=7028 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Aug 13 00:06:40.295000 audit[7028]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffffce0690 a2=3 a3=1 items=0 ppid=1 pid=7028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:06:40.339796 systemd[1]: Started session-27.scope.
Aug 13 00:06:40.340640 systemd-logind[1566]: New session 27 of user core.
Aug 13 00:06:40.360145 kernel: audit: type=1300 audit(1755043600.295:630): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffffce0690 a2=3 a3=1 items=0 ppid=1 pid=7028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:06:40.295000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:06:40.369087 kernel: audit: type=1327 audit(1755043600.295:630): proctitle=737368643A20636F7265205B707269765D
Aug 13 00:06:40.344000 audit[7028]: USER_START pid=7028 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:40.396096 kernel: audit: type=1105 audit(1755043600.344:631): pid=7028 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:40.400334 kernel: audit: type=1103 audit(1755043600.346:632): pid=7030 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:40.346000 audit[7030]: CRED_ACQ pid=7030 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:40.709867 sshd[7028]: pam_unix(sshd:session): session closed for user core
Aug 13 00:06:40.710000 audit[7028]: USER_END pid=7028 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:40.714237 systemd-logind[1566]: Session 27 logged out. Waiting for processes to exit.
Aug 13 00:06:40.715670 systemd[1]: sshd@24-10.200.20.35:22-10.200.16.10:35694.service: Deactivated successfully.
Aug 13 00:06:40.716588 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 00:06:40.718321 systemd-logind[1566]: Removed session 27.
Aug 13 00:06:40.711000 audit[7028]: CRED_DISP pid=7028 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:40.762777 kernel: audit: type=1106 audit(1755043600.710:633): pid=7028 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:40.762902 kernel: audit: type=1104 audit(1755043600.711:634): pid=7028 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:40.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.35:22-10.200.16.10:35694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:06:45.814339 kernel: kauditd_printk_skb: 1 callbacks suppressed
Aug 13 00:06:45.814436 kernel: audit: type=1130 audit(1755043605.786:636): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.35:22-10.200.16.10:50098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:06:45.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.35:22-10.200.16.10:50098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:06:45.787243 systemd[1]: Started sshd@25-10.200.20.35:22-10.200.16.10:50098.service.
Aug 13 00:06:46.255000 audit[7063]: USER_ACCT pid=7063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:46.259870 sshd[7063]: Accepted publickey for core from 10.200.16.10 port 50098 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:06:46.282347 sshd[7063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:06:46.282743 kernel: audit: type=1101 audit(1755043606.255:637): pid=7063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:46.282809 kernel: audit: type=1103 audit(1755043606.281:638): pid=7063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:46.281000 audit[7063]: CRED_ACQ pid=7063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:46.320324 kernel: audit: type=1006 audit(1755043606.281:639): pid=7063 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Aug 13 00:06:46.320781 kernel: audit: type=1300 audit(1755043606.281:639): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff36323d0 a2=3 a3=1 items=0 ppid=1 pid=7063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:06:46.281000 audit[7063]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff36323d0 a2=3 a3=1 items=0 ppid=1 pid=7063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:06:46.281000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:06:46.353316 kernel: audit: type=1327 audit(1755043606.281:639): proctitle=737368643A20636F7265205B707269765D
Aug 13 00:06:46.356304 systemd-logind[1566]: New session 28 of user core.
Aug 13 00:06:46.356839 systemd[1]: Started session-28.scope.
Aug 13 00:06:46.361000 audit[7063]: USER_START pid=7063 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:46.388000 audit[7066]: CRED_ACQ pid=7066 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:46.410959 kernel: audit: type=1105 audit(1755043606.361:640): pid=7063 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:46.411107 kernel: audit: type=1103 audit(1755043606.388:641): pid=7066 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:46.728885 sshd[7063]: pam_unix(sshd:session): session closed for user core
Aug 13 00:06:46.729000 audit[7063]: USER_END pid=7063 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:46.757077 systemd[1]: sshd@25-10.200.20.35:22-10.200.16.10:50098.service: Deactivated successfully.
Aug 13 00:06:46.730000 audit[7063]: CRED_DISP pid=7063 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:46.757916 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 00:06:46.779271 kernel: audit: type=1106 audit(1755043606.729:642): pid=7063 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:46.779362 kernel: audit: type=1104 audit(1755043606.730:643): pid=7063 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Aug 13 00:06:46.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.35:22-10.200.16.10:50098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:06:46.779969 systemd-logind[1566]: Session 28 logged out. Waiting for processes to exit.
Aug 13 00:06:46.781017 systemd-logind[1566]: Removed session 28.