Mar 17 18:49:21.046878 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 17 18:49:21.046897 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Mar 17 17:11:44 -00 2025 Mar 17 18:49:21.046904 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Mar 17 18:49:21.046911 kernel: printk: bootconsole [pl11] enabled Mar 17 18:49:21.046916 kernel: efi: EFI v2.70 by EDK II Mar 17 18:49:21.046921 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3763cf98 Mar 17 18:49:21.046928 kernel: random: crng init done Mar 17 18:49:21.046933 kernel: ACPI: Early table checksum verification disabled Mar 17 18:49:21.046938 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Mar 17 18:49:21.046944 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:49:21.046949 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:49:21.046955 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Mar 17 18:49:21.046961 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:49:21.046967 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:49:21.046974 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:49:21.046979 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:49:21.046985 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:49:21.046992 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:49:21.046998 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Mar 17 18:49:21.047004 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 17 18:49:21.047009 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Mar 17 18:49:21.047015 kernel: NUMA: Failed to initialise from firmware Mar 17 18:49:21.047021 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] Mar 17 18:49:21.047027 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff] Mar 17 18:49:21.047032 kernel: Zone ranges: Mar 17 18:49:21.047038 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Mar 17 18:49:21.047044 kernel: DMA32 empty Mar 17 18:49:21.047049 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Mar 17 18:49:21.047056 kernel: Movable zone start for each node Mar 17 18:49:21.047062 kernel: Early memory node ranges Mar 17 18:49:21.047067 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Mar 17 18:49:21.047073 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Mar 17 18:49:21.047079 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Mar 17 18:49:21.047084 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Mar 17 18:49:21.047090 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Mar 17 18:49:21.047095 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Mar 17 18:49:21.047101 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Mar 17 18:49:21.047107 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Mar 17 
18:49:21.047112 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Mar 17 18:49:21.047119 kernel: psci: probing for conduit method from ACPI. Mar 17 18:49:21.047127 kernel: psci: PSCIv1.1 detected in firmware. Mar 17 18:49:21.047133 kernel: psci: Using standard PSCI v0.2 function IDs Mar 17 18:49:21.047140 kernel: psci: MIGRATE_INFO_TYPE not supported. Mar 17 18:49:21.047145 kernel: psci: SMC Calling Convention v1.4 Mar 17 18:49:21.047151 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 Mar 17 18:49:21.047159 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 Mar 17 18:49:21.047165 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 Mar 17 18:49:21.047171 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 Mar 17 18:49:21.047177 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 17 18:49:21.047183 kernel: Detected PIPT I-cache on CPU0 Mar 17 18:49:21.047189 kernel: CPU features: detected: GIC system register CPU interface Mar 17 18:49:21.047195 kernel: CPU features: detected: Hardware dirty bit management Mar 17 18:49:21.047201 kernel: CPU features: detected: Spectre-BHB Mar 17 18:49:21.047207 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 17 18:49:21.047213 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 17 18:49:21.047219 kernel: CPU features: detected: ARM erratum 1418040 Mar 17 18:49:21.047226 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Mar 17 18:49:21.047232 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 17 18:49:21.047239 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Mar 17 18:49:21.047245 kernel: Policy zone: Normal Mar 17 18:49:21.047252 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d Mar 17 18:49:21.047259 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 18:49:21.047265 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 18:49:21.047271 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 18:49:21.047277 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 18:49:21.047283 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB) Mar 17 18:49:21.047289 kernel: Memory: 3986944K/4194160K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 207216K reserved, 0K cma-reserved) Mar 17 18:49:21.047297 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 17 18:49:21.047303 kernel: trace event string verifier disabled Mar 17 18:49:21.047309 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 18:49:21.047316 kernel: rcu: RCU event tracing is enabled. Mar 17 18:49:21.047322 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 17 18:49:21.047328 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 18:49:21.047334 kernel: Tracing variant of Tasks RCU enabled. Mar 17 18:49:21.047340 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 17 18:49:21.047346 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 17 18:49:21.047352 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 17 18:49:21.047358 kernel: GICv3: 960 SPIs implemented Mar 17 18:49:21.047365 kernel: GICv3: 0 Extended SPIs implemented Mar 17 18:49:21.047371 kernel: GICv3: Distributor has no Range Selector support Mar 17 18:49:21.047377 kernel: Root IRQ handler: gic_handle_irq Mar 17 18:49:21.047383 kernel: GICv3: 16 PPIs implemented Mar 17 18:49:21.047389 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Mar 17 18:49:21.047395 kernel: ITS: No ITS available, not enabling LPIs Mar 17 18:49:21.047401 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 18:49:21.047407 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 17 18:49:21.047414 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 17 18:49:21.047420 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 17 18:49:21.047426 kernel: Console: colour dummy device 80x25 Mar 17 18:49:21.047434 kernel: printk: console [tty1] enabled Mar 17 18:49:21.047440 kernel: ACPI: Core revision 20210730 Mar 17 18:49:21.047446 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 17 18:49:21.047453 kernel: pid_max: default: 32768 minimum: 301 Mar 17 18:49:21.047459 kernel: LSM: Security Framework initializing Mar 17 18:49:21.047465 kernel: SELinux: Initializing. Mar 17 18:49:21.047471 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 18:49:21.047477 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 18:49:21.047483 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Mar 17 18:49:21.047490 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Mar 17 18:49:21.047497 kernel: rcu: Hierarchical SRCU implementation. Mar 17 18:49:21.047503 kernel: Remapping and enabling EFI services. Mar 17 18:49:21.047509 kernel: smp: Bringing up secondary CPUs ... Mar 17 18:49:21.047515 kernel: Detected PIPT I-cache on CPU1 Mar 17 18:49:21.047522 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Mar 17 18:49:21.047528 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 18:49:21.047534 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 17 18:49:21.047540 kernel: smp: Brought up 1 node, 2 CPUs Mar 17 18:49:21.047546 kernel: SMP: Total of 2 processors activated. 
Mar 17 18:49:21.047554 kernel: CPU features: detected: 32-bit EL0 Support Mar 17 18:49:21.047560 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Mar 17 18:49:21.047567 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 17 18:49:21.047573 kernel: CPU features: detected: CRC32 instructions Mar 17 18:49:21.047579 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 17 18:49:21.047585 kernel: CPU features: detected: LSE atomic instructions Mar 17 18:49:21.047591 kernel: CPU features: detected: Privileged Access Never Mar 17 18:49:21.049684 kernel: CPU: All CPU(s) started at EL1 Mar 17 18:49:21.049698 kernel: alternatives: patching kernel code Mar 17 18:49:21.049709 kernel: devtmpfs: initialized Mar 17 18:49:21.049720 kernel: KASLR enabled Mar 17 18:49:21.049728 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 18:49:21.049736 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 17 18:49:21.049742 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 18:49:21.049749 kernel: SMBIOS 3.1.0 present. Mar 17 18:49:21.049756 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Mar 17 18:49:21.049762 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 18:49:21.049770 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 17 18:49:21.049778 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 17 18:49:21.049785 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 17 18:49:21.049791 kernel: audit: initializing netlink subsys (disabled) Mar 17 18:49:21.049798 kernel: audit: type=2000 audit(0.089:1): state=initialized audit_enabled=0 res=1 Mar 17 18:49:21.049805 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 18:49:21.049811 kernel: cpuidle: using governor menu Mar 17 18:49:21.049818 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Mar 17 18:49:21.049826 kernel: ASID allocator initialised with 32768 entries Mar 17 18:49:21.049833 kernel: ACPI: bus type PCI registered Mar 17 18:49:21.049840 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 18:49:21.049846 kernel: Serial: AMBA PL011 UART driver Mar 17 18:49:21.049853 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 18:49:21.049860 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Mar 17 18:49:21.049866 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 18:49:21.049873 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Mar 17 18:49:21.049880 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 18:49:21.049888 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 17 18:49:21.049895 kernel: ACPI: Added _OSI(Module Device) Mar 17 18:49:21.049902 kernel: ACPI: Added _OSI(Processor Device) Mar 17 18:49:21.049908 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 18:49:21.049915 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 18:49:21.049922 kernel: ACPI: Added _OSI(Linux-Dell-Video) Mar 17 18:49:21.049929 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Mar 17 18:49:21.049936 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Mar 17 18:49:21.049942 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 18:49:21.049950 kernel: ACPI: Interpreter enabled Mar 17 18:49:21.049957 kernel: ACPI: Using GIC for interrupt routing Mar 17 18:49:21.049963 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Mar 17 18:49:21.049970 kernel: printk: console [ttyAMA0] enabled Mar 17 18:49:21.049977 kernel: printk: bootconsole [pl11] disabled Mar 17 18:49:21.049983 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Mar 17 18:49:21.049990 kernel: iommu: Default domain type: Translated Mar 17 18:49:21.049997 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 17 18:49:21.050004 kernel: vgaarb: loaded Mar 17 18:49:21.050011 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 17 18:49:21.050019 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 17 18:49:21.050025 kernel: PTP clock support registered Mar 17 18:49:21.050032 kernel: Registered efivars operations Mar 17 18:49:21.050038 kernel: No ACPI PMU IRQ for CPU0 Mar 17 18:49:21.050045 kernel: No ACPI PMU IRQ for CPU1 Mar 17 18:49:21.050052 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 17 18:49:21.050058 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 18:49:21.050065 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 18:49:21.050073 kernel: pnp: PnP ACPI init Mar 17 18:49:21.050079 kernel: pnp: PnP ACPI: found 0 devices Mar 17 18:49:21.050086 kernel: NET: Registered PF_INET protocol family Mar 17 18:49:21.050093 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 18:49:21.050099 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 17 18:49:21.050106 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 18:49:21.050113 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 18:49:21.050120 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Mar 17 18:49:21.050126 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 17 18:49:21.050134 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 18:49:21.050141 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 18:49:21.050148 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 18:49:21.050154 kernel: PCI: CLS 0 bytes, default 64 Mar 17 18:49:21.050161 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Mar 17 18:49:21.050167 kernel: kvm [1]: HYP mode not available Mar 17 18:49:21.050174 kernel: Initialise system trusted keyrings Mar 17 18:49:21.050181 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 17 18:49:21.050187 kernel: Key type asymmetric registered Mar 17 18:49:21.050195 kernel: Asymmetric key parser 'x509' registered Mar 17 18:49:21.050201 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Mar 17 18:49:21.050208 kernel: io scheduler mq-deadline registered Mar 17 18:49:21.050215 kernel: io scheduler kyber registered Mar 17 18:49:21.050221 kernel: io scheduler bfq registered Mar 17 18:49:21.050228 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 18:49:21.050235 kernel: thunder_xcv, ver 1.0 Mar 17 18:49:21.050241 kernel: thunder_bgx, ver 1.0 Mar 17 18:49:21.050248 kernel: nicpf, ver 1.0 Mar 17 18:49:21.050254 kernel: nicvf, ver 1.0 Mar 17 18:49:21.050382 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 17 18:49:21.050442 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T18:49:20 UTC (1742237360) Mar 17 18:49:21.050451 kernel: efifb: probing for efifb Mar 17 18:49:21.050458 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 17 18:49:21.050464 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 17 18:49:21.050471 kernel: efifb: scrolling: redraw Mar 17 18:49:21.050478 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 17 18:49:21.050486 kernel: Console: switching to colour frame buffer device 128x48 Mar 17 18:49:21.050493 kernel: fb0: EFI VGA frame buffer device Mar 17 18:49:21.050499 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Mar 17 18:49:21.050506 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 18:49:21.050512 kernel: NET: Registered PF_INET6 protocol family Mar 17 18:49:21.050519 kernel: Segment Routing with IPv6 Mar 17 18:49:21.050526 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 18:49:21.050532 kernel: NET: Registered PF_PACKET protocol family Mar 17 18:49:21.050539 kernel: Key type dns_resolver registered Mar 17 18:49:21.050545 kernel: registered taskstats version 1 Mar 17 18:49:21.050553 kernel: Loading compiled-in X.509 certificates Mar 17 18:49:21.050560 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: c6f3fb83dc6bb7052b07ec5b1ef41d12f9b3f7e4' Mar 17 18:49:21.050566 kernel: Key type .fscrypt registered Mar 17 18:49:21.050572 kernel: Key type fscrypt-provisioning registered Mar 17 18:49:21.050579 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 18:49:21.050586 kernel: ima: Allocated hash algorithm: sha1 Mar 17 18:49:21.050592 kernel: ima: No architecture policies found Mar 17 18:49:21.050613 kernel: clk: Disabling unused clocks Mar 17 18:49:21.050623 kernel: Freeing unused kernel memory: 36416K Mar 17 18:49:21.050630 kernel: Run /init as init process Mar 17 18:49:21.050637 kernel: with arguments: Mar 17 18:49:21.050643 kernel: /init Mar 17 18:49:21.050650 kernel: with environment: Mar 17 18:49:21.050657 kernel: HOME=/ Mar 17 18:49:21.050663 kernel: TERM=linux Mar 17 18:49:21.050669 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 18:49:21.050678 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:49:21.050689 systemd[1]: Detected virtualization microsoft. Mar 17 18:49:21.050696 systemd[1]: Detected architecture arm64. Mar 17 18:49:21.050703 systemd[1]: Running in initrd. Mar 17 18:49:21.050710 systemd[1]: No hostname configured, using default hostname. Mar 17 18:49:21.050717 systemd[1]: Hostname set to . Mar 17 18:49:21.050724 systemd[1]: Initializing machine ID from random generator. Mar 17 18:49:21.050735 systemd[1]: Queued start job for default target initrd.target. Mar 17 18:49:21.050744 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:49:21.050751 systemd[1]: Reached target cryptsetup.target. Mar 17 18:49:21.050758 systemd[1]: Reached target paths.target. Mar 17 18:49:21.050765 systemd[1]: Reached target slices.target. Mar 17 18:49:21.050772 systemd[1]: Reached target swap.target. Mar 17 18:49:21.050780 systemd[1]: Reached target timers.target. Mar 17 18:49:21.050788 systemd[1]: Listening on iscsid.socket. Mar 17 18:49:21.050795 systemd[1]: Listening on iscsiuio.socket. Mar 17 18:49:21.050803 systemd[1]: Listening on systemd-journald-audit.socket. Mar 17 18:49:21.050810 systemd[1]: Listening on systemd-journald-dev-log.socket. Mar 17 18:49:21.050817 systemd[1]: Listening on systemd-journald.socket. Mar 17 18:49:21.050824 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:49:21.050832 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:49:21.050839 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:49:21.050846 systemd[1]: Reached target sockets.target. Mar 17 18:49:21.050853 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:49:21.050860 systemd[1]: Finished network-cleanup.service. 
Mar 17 18:49:21.050868 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 18:49:21.050875 systemd[1]: Starting systemd-journald.service... Mar 17 18:49:21.050882 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:49:21.050889 systemd[1]: Starting systemd-resolved.service... Mar 17 18:49:21.050896 systemd[1]: Starting systemd-vconsole-setup.service... Mar 17 18:49:21.050908 systemd-journald[276]: Journal started Mar 17 18:49:21.050955 systemd-journald[276]: Runtime Journal (/run/log/journal/c1ac1db4a29a456a939cbfe21aac7e70) is 8.0M, max 78.5M, 70.5M free. Mar 17 18:49:21.042641 systemd-modules-load[277]: Inserted module 'overlay' Mar 17 18:49:21.086623 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 18:49:21.086675 systemd[1]: Started systemd-journald.service. Mar 17 18:49:21.099495 kernel: Bridge firewalling registered Mar 17 18:49:21.099638 systemd-modules-load[277]: Inserted module 'br_netfilter' Mar 17 18:49:21.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.100802 systemd-resolved[278]: Positive Trust Anchors: Mar 17 18:49:21.166114 kernel: audit: type=1130 audit(1742237361.102:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.166141 kernel: SCSI subsystem initialized Mar 17 18:49:21.166151 kernel: audit: type=1130 audit(1742237361.132:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.100810 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:49:21.209714 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 18:49:21.209738 kernel: audit: type=1130 audit(1742237361.137:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.209749 kernel: device-mapper: uevent: version 1.0.3 Mar 17 18:49:21.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:21.100841 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:49:21.277509 kernel: audit: type=1130 audit(1742237361.214:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.277541 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Mar 17 18:49:21.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.103354 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:49:21.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.110127 systemd-resolved[278]: Defaulting to hostname 'linux'. Mar 17 18:49:21.338284 kernel: audit: type=1130 audit(1742237361.282:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.338309 kernel: audit: type=1130 audit(1742237361.306:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.132885 systemd[1]: Started systemd-resolved.service. Mar 17 18:49:21.145678 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 18:49:21.271939 systemd[1]: Finished systemd-vconsole-setup.service. Mar 17 18:49:21.282508 systemd-modules-load[277]: Inserted module 'dm_multipath' Mar 17 18:49:21.396162 kernel: audit: type=1130 audit(1742237361.375:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.299105 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:49:21.306895 systemd[1]: Reached target nss-lookup.target. Mar 17 18:49:21.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.333796 systemd[1]: Starting dracut-cmdline-ask.service... Mar 17 18:49:21.340540 systemd[1]: Starting systemd-sysctl.service... 
Mar 17 18:49:21.459074 kernel: audit: type=1130 audit(1742237361.407:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.459102 kernel: audit: type=1130 audit(1742237361.438:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.354881 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:49:21.370310 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:49:21.477927 dracut-cmdline[298]: dracut-dracut-053 Mar 17 18:49:21.477927 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d Mar 17 18:49:21.376371 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:49:21.429639 systemd[1]: Finished dracut-cmdline-ask.service. Mar 17 18:49:21.442582 systemd[1]: Starting dracut-cmdline.service... Mar 17 18:49:21.555618 kernel: Loading iSCSI transport class v2.0-870. Mar 17 18:49:21.569628 kernel: iscsi: registered transport (tcp) Mar 17 18:49:21.590742 kernel: iscsi: registered transport (qla4xxx) Mar 17 18:49:21.590803 kernel: QLogic iSCSI HBA Driver Mar 17 18:49:21.627671 systemd[1]: Finished dracut-cmdline.service. Mar 17 18:49:21.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:21.633534 systemd[1]: Starting dracut-pre-udev.service... Mar 17 18:49:21.689618 kernel: raid6: neonx8 gen() 13820 MB/s Mar 17 18:49:21.710608 kernel: raid6: neonx8 xor() 10831 MB/s Mar 17 18:49:21.731607 kernel: raid6: neonx4 gen() 13573 MB/s Mar 17 18:49:21.754614 kernel: raid6: neonx4 xor() 11227 MB/s Mar 17 18:49:21.775616 kernel: raid6: neonx2 gen() 13094 MB/s Mar 17 18:49:21.796610 kernel: raid6: neonx2 xor() 10492 MB/s Mar 17 18:49:21.818617 kernel: raid6: neonx1 gen() 10507 MB/s Mar 17 18:49:21.838612 kernel: raid6: neonx1 xor() 8754 MB/s Mar 17 18:49:21.859608 kernel: raid6: int64x8 gen() 6275 MB/s Mar 17 18:49:21.881609 kernel: raid6: int64x8 xor() 3543 MB/s Mar 17 18:49:21.901612 kernel: raid6: int64x4 gen() 7209 MB/s Mar 17 18:49:21.922608 kernel: raid6: int64x4 xor() 3855 MB/s Mar 17 18:49:21.944613 kernel: raid6: int64x2 gen() 6152 MB/s Mar 17 18:49:21.965611 kernel: raid6: int64x2 xor() 3323 MB/s Mar 17 18:49:21.986608 kernel: raid6: int64x1 gen() 5043 MB/s Mar 17 18:49:22.015570 kernel: raid6: int64x1 xor() 2645 MB/s Mar 17 18:49:22.015591 kernel: raid6: using algorithm neonx8 gen() 13820 MB/s Mar 17 18:49:22.015623 kernel: raid6: .... 
xor() 10831 MB/s, rmw enabled Mar 17 18:49:22.020164 kernel: raid6: using neon recovery algorithm Mar 17 18:49:22.041724 kernel: xor: measuring software checksum speed Mar 17 18:49:22.041736 kernel: 8regs : 17231 MB/sec Mar 17 18:49:22.045524 kernel: 32regs : 20712 MB/sec Mar 17 18:49:22.056953 kernel: arm64_neon : 25776 MB/sec Mar 17 18:49:22.056963 kernel: xor: using function: arm64_neon (25776 MB/sec) Mar 17 18:49:22.114617 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Mar 17 18:49:22.125510 systemd[1]: Finished dracut-pre-udev.service. Mar 17 18:49:22.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:22.134000 audit: BPF prog-id=7 op=LOAD Mar 17 18:49:22.135000 audit: BPF prog-id=8 op=LOAD Mar 17 18:49:22.136154 systemd[1]: Starting systemd-udevd.service... Mar 17 18:49:22.151699 systemd-udevd[474]: Using default interface naming scheme 'v252'. Mar 17 18:49:22.158365 systemd[1]: Started systemd-udevd.service. Mar 17 18:49:22.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:22.172366 systemd[1]: Starting dracut-pre-trigger.service... Mar 17 18:49:22.188029 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation Mar 17 18:49:22.221158 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 18:49:22.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:22.227245 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:49:22.264387 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:49:22.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:22.328615 kernel: hv_vmbus: Vmbus version:5.3 Mar 17 18:49:22.355886 kernel: hv_vmbus: registering driver hid_hyperv Mar 17 18:49:22.355942 kernel: hv_vmbus: registering driver hv_storvsc Mar 17 18:49:22.355952 kernel: hv_vmbus: registering driver hv_netvsc Mar 17 18:49:22.355961 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 17 18:49:22.355971 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Mar 17 18:49:22.374638 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 17 18:49:22.387830 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Mar 17 18:49:22.387882 kernel: scsi host1: storvsc_host_t Mar 17 18:49:22.394740 kernel: scsi host0: storvsc_host_t Mar 17 18:49:22.401876 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 17 18:49:22.408822 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Mar 17 18:49:22.433632 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 17 18:49:22.455919 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 18:49:22.455932 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 17 18:49:22.467085 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 17 18:49:22.467181 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 17 18:49:22.467258 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 17 18:49:22.467339 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 17 18:49:22.467415 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 17 18:49:22.467499 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:49:22.467509 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 17 18:49:22.483628 kernel: hv_netvsc 002248c1-6df4-0022-48c1-6df4002248c1 eth0: VF slot 1 added Mar 17 18:49:22.493635 kernel: hv_vmbus: registering driver hv_pci Mar 17 18:49:22.501623 kernel: hv_pci 6247b713-323e-4f84-9199-6f741194600a: PCI VMBus probing: Using version 0x10004 Mar 17 18:49:22.615953 kernel: hv_pci 6247b713-323e-4f84-9199-6f741194600a: PCI host bridge to bus 323e:00 Mar 17 18:49:22.616051 kernel: pci_bus 323e:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 17 18:49:22.616144 kernel: pci_bus 323e:00: No busn resource found for root bus, will use [bus 00-ff] Mar 17 18:49:22.616215 kernel: pci 323e:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 17 18:49:22.616303 kernel: pci 323e:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 17 18:49:22.616380 kernel: pci 323e:00:02.0: enabling Extended Tags Mar 17 18:49:22.616459 kernel: pci 323e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 323e:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 17 18:49:22.616538 kernel: pci_bus 323e:00: busn_res: [bus 00-ff] end is updated to 00 Mar 17 18:49:22.616643 kernel: pci 323e:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 17 18:49:22.654625 kernel: mlx5_core 323e:00:02.0: firmware version: 16.30.1284 Mar 17 18:49:22.887723 kernel: mlx5_core 323e:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Mar 17 18:49:22.887834 kernel: hv_netvsc 002248c1-6df4-0022-48c1-6df4002248c1 eth0: VF registering: eth1 Mar 17 18:49:22.887916 kernel: mlx5_core 323e:00:02.0 eth1: joined to eth0 Mar 17 18:49:22.859091 systemd[1]: Found device 
dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 18:49:22.909154 kernel: mlx5_core 323e:00:02.0 enP12862s1: renamed from eth1 Mar 17 18:49:22.909348 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (532) Mar 17 18:49:22.924729 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:49:23.049768 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:49:23.056906 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:49:23.071296 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 17 18:49:23.094304 systemd[1]: Starting disk-uuid.service... Mar 17 18:49:23.116650 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:49:23.126626 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:49:24.135632 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:49:24.135691 disk-uuid[604]: The operation has completed successfully. Mar 17 18:49:24.193390 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:49:24.195743 systemd[1]: Finished disk-uuid.service. Mar 17 18:49:24.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:24.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:24.208050 systemd[1]: Starting verity-setup.service... Mar 17 18:49:24.249820 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 18:49:24.468282 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:49:24.474064 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:49:24.481870 systemd[1]: Finished verity-setup.service. Mar 17 18:49:24.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:24.545623 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:49:24.546425 systemd[1]: Mounted sysusr-usr.mount. Mar 17 18:49:24.550541 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 18:49:24.551351 systemd[1]: Starting ignition-setup.service... Mar 17 18:49:24.572659 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:49:24.597588 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:49:24.597656 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:49:24.602528 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:49:24.662630 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:49:24.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:24.673000 audit: BPF prog-id=9 op=LOAD Mar 17 18:49:24.674205 systemd[1]: Starting systemd-networkd.service... Mar 17 18:49:24.687777 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Mar 17 18:49:24.704958 systemd-networkd[848]: lo: Link UP Mar 17 18:49:24.704973 systemd-networkd[848]: lo: Gained carrier Mar 17 18:49:24.705730 systemd-networkd[848]: Enumeration completed Mar 17 18:49:24.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:24.705831 systemd[1]: Started systemd-networkd.service. Mar 17 18:49:24.717846 systemd-networkd[848]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:49:24.719008 systemd[1]: Reached target network.target. Mar 17 18:49:24.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:24.727937 systemd[1]: Starting iscsiuio.service... Mar 17 18:49:24.745421 systemd[1]: Started iscsiuio.service. Mar 17 18:49:24.769361 iscsid[855]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:49:24.769361 iscsid[855]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Mar 17 18:49:24.769361 iscsid[855]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Mar 17 18:49:24.769361 iscsid[855]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:49:24.769361 iscsid[855]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:49:24.769361 iscsid[855]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:49:24.769361 iscsid[855]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:49:24.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:24.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:24.752809 systemd[1]: Starting iscsid.service... Mar 17 18:49:24.770845 systemd[1]: Started iscsid.service. Mar 17 18:49:24.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:24.800394 systemd[1]: Finished ignition-setup.service. Mar 17 18:49:24.890733 kernel: mlx5_core 323e:00:02.0 enP12862s1: Link up Mar 17 18:49:24.825790 systemd[1]: Starting dracut-initqueue.service... Mar 17 18:49:24.846328 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 18:49:24.863450 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:49:24.871621 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:49:24.883370 systemd[1]: Reached target remote-cryptsetup.target. 
Mar 17 18:49:24.942819 kernel: hv_netvsc 002248c1-6df4-0022-48c1-6df4002248c1 eth0: Data path switched to VF: enP12862s1 Mar 17 18:49:24.942985 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:49:24.893895 systemd[1]: Reached target remote-fs.target. Mar 17 18:49:24.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:24.905072 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:49:24.920560 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:49:24.944887 systemd-networkd[848]: enP12862s1: Link UP Mar 17 18:49:24.944966 systemd-networkd[848]: eth0: Link UP Mar 17 18:49:24.945080 systemd-networkd[848]: eth0: Gained carrier Mar 17 18:49:24.960842 systemd-networkd[848]: enP12862s1: Gained carrier Mar 17 18:49:24.974669 systemd-networkd[848]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 18:49:26.692779 systemd-networkd[848]: eth0: Gained IPv6LL Mar 17 18:49:27.584928 ignition[862]: Ignition 2.14.0 Mar 17 18:49:27.588317 ignition[862]: Stage: fetch-offline Mar 17 18:49:27.588427 ignition[862]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:49:27.588454 ignition[862]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:49:27.680015 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:49:27.680205 ignition[862]: parsed url from cmdline: "" Mar 17 18:49:27.680209 ignition[862]: no config URL provided Mar 17 18:49:27.680214 ignition[862]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:49:27.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:27.693813 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 18:49:27.739406 kernel: kauditd_printk_skb: 18 callbacks suppressed Mar 17 18:49:27.739427 kernel: audit: type=1130 audit(1742237367.702:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:27.680222 ignition[862]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:49:27.713928 systemd[1]: Starting ignition-fetch.service... 
Mar 17 18:49:27.680228 ignition[862]: failed to fetch config: resource requires networking Mar 17 18:49:27.680720 ignition[862]: Ignition finished successfully Mar 17 18:49:27.721161 ignition[877]: Ignition 2.14.0 Mar 17 18:49:27.721167 ignition[877]: Stage: fetch Mar 17 18:49:27.721285 ignition[877]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:49:27.721306 ignition[877]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:49:27.724040 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:49:27.724167 ignition[877]: parsed url from cmdline: "" Mar 17 18:49:27.724172 ignition[877]: no config URL provided Mar 17 18:49:27.724176 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:49:27.724184 ignition[877]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:49:27.724227 ignition[877]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 17 18:49:27.837317 ignition[877]: GET result: OK Mar 17 18:49:27.837415 ignition[877]: config has been read from IMDS userdata Mar 17 18:49:27.841042 unknown[877]: fetched base config from "system" Mar 17 18:49:27.837461 ignition[877]: parsing config with SHA512: ff0b879e3d1282474c58f75c056ab2568bbbdea1ec05de1a73187ec3b1e0a9161a676159a2342fe89e6fea9c82aab9fd41ebe6d34bc3c86b8119225458c31bd3 Mar 17 18:49:27.841050 unknown[877]: fetched base config from "system" Mar 17 18:49:27.880526 kernel: audit: type=1130 audit(1742237367.855:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:27.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:27.841629 ignition[877]: fetch: fetch complete Mar 17 18:49:27.841055 unknown[877]: fetched user config from "azure" Mar 17 18:49:27.841635 ignition[877]: fetch: fetch passed Mar 17 18:49:27.851325 systemd[1]: Finished ignition-fetch.service. Mar 17 18:49:27.841683 ignition[877]: Ignition finished successfully Mar 17 18:49:27.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:27.857040 systemd[1]: Starting ignition-kargs.service... Mar 17 18:49:27.939627 kernel: audit: type=1130 audit(1742237367.909:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:27.889457 ignition[883]: Ignition 2.14.0 Mar 17 18:49:27.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:27.901746 systemd[1]: Finished ignition-kargs.service. Mar 17 18:49:27.977665 kernel: audit: type=1130 audit(1742237367.944:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:27.889464 ignition[883]: Stage: kargs Mar 17 18:49:27.910979 systemd[1]: Starting ignition-disks.service... Mar 17 18:49:27.889580 ignition[883]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:49:27.937963 systemd[1]: Finished ignition-disks.service. Mar 17 18:49:27.889617 ignition[883]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:49:27.944432 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:49:27.892377 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:49:27.972994 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:49:27.896653 ignition[883]: kargs: kargs passed Mar 17 18:49:27.982748 systemd[1]: Reached target local-fs.target. Mar 17 18:49:27.896712 ignition[883]: Ignition finished successfully Mar 17 18:49:27.990197 systemd[1]: Reached target sysinit.target. Mar 17 18:49:27.921635 ignition[889]: Ignition 2.14.0 Mar 17 18:49:27.999320 systemd[1]: Reached target basic.target. Mar 17 18:49:27.921642 ignition[889]: Stage: disks Mar 17 18:49:28.015159 systemd[1]: Starting systemd-fsck-root.service... Mar 17 18:49:27.921770 ignition[889]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:49:27.921796 ignition[889]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:49:28.075737 systemd-fsck[897]: ROOT: clean, 623/7326000 files, 481077/7359488 blocks Mar 17 18:49:27.925620 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:49:28.115968 kernel: audit: type=1130 audit(1742237368.092:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:28.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:28.080694 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:49:27.928492 ignition[889]: disks: disks passed Mar 17 18:49:28.115256 systemd[1]: Mounting sysroot.mount... Mar 17 18:49:27.928572 ignition[889]: Ignition finished successfully Mar 17 18:49:28.149618 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:49:28.149764 systemd[1]: Mounted sysroot.mount. Mar 17 18:49:28.153593 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:49:28.189463 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:49:28.195114 systemd[1]: Starting flatcar-metadata-hostname.service... Mar 17 18:49:28.207546 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:49:28.207594 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:49:28.223553 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:49:28.274415 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:49:28.279846 systemd[1]: Starting initrd-setup-root.service... 
Mar 17 18:49:28.310278 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (908) Mar 17 18:49:28.310333 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:49:28.310350 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:49:28.321962 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:49:28.326593 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:49:28.330538 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:49:28.348168 initrd-setup-root[939]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:49:28.374379 initrd-setup-root[947]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:49:28.384419 initrd-setup-root[955]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:49:28.804633 systemd[1]: Finished initrd-setup-root.service. Mar 17 18:49:28.835711 kernel: audit: type=1130 audit(1742237368.809:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:28.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:28.811230 systemd[1]: Starting ignition-mount.service... Mar 17 18:49:28.840763 systemd[1]: Starting sysroot-boot.service... Mar 17 18:49:28.847898 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Mar 17 18:49:28.847994 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Mar 17 18:49:28.878220 systemd[1]: Finished sysroot-boot.service. Mar 17 18:49:28.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:28.902950 ignition[975]: INFO : Ignition 2.14.0 Mar 17 18:49:28.902950 ignition[975]: INFO : Stage: mount Mar 17 18:49:28.902950 ignition[975]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:49:28.902950 ignition[975]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:49:28.958218 kernel: audit: type=1130 audit(1742237368.882:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:28.958245 kernel: audit: type=1130 audit(1742237368.928:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:28.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:28.916866 systemd[1]: Finished ignition-mount.service. 
Mar 17 18:49:28.969445 ignition[975]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:49:28.969445 ignition[975]: INFO : mount: mount passed Mar 17 18:49:28.969445 ignition[975]: INFO : Ignition finished successfully Mar 17 18:49:29.464792 coreos-metadata[907]: Mar 17 18:49:29.464 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 17 18:49:29.475979 coreos-metadata[907]: Mar 17 18:49:29.475 INFO Fetch successful Mar 17 18:49:29.509194 coreos-metadata[907]: Mar 17 18:49:29.509 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 17 18:49:29.529636 coreos-metadata[907]: Mar 17 18:49:29.529 INFO Fetch successful Mar 17 18:49:29.544624 coreos-metadata[907]: Mar 17 18:49:29.543 INFO wrote hostname ci-3510.3.7-a-c36c8d7be6 to /sysroot/etc/hostname Mar 17 18:49:29.553434 systemd[1]: Finished flatcar-metadata-hostname.service. Mar 17 18:49:29.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:29.585644 kernel: audit: type=1130 audit(1742237369.558:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:29.559936 systemd[1]: Starting ignition-files.service... Mar 17 18:49:29.590586 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:49:29.611641 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (986) Mar 17 18:49:29.625082 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:49:29.625124 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:49:29.625134 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:49:29.634446 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Mar 17 18:49:29.649306 ignition[1005]: INFO : Ignition 2.14.0 Mar 17 18:49:29.649306 ignition[1005]: INFO : Stage: files Mar 17 18:49:29.663030 ignition[1005]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:49:29.663030 ignition[1005]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:49:29.663030 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:49:29.663030 ignition[1005]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:49:29.663030 ignition[1005]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:49:29.663030 ignition[1005]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:49:29.758034 ignition[1005]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:49:29.767181 ignition[1005]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:49:29.767181 ignition[1005]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:49:29.766588 unknown[1005]: wrote ssh authorized keys file for user: core Mar 17 18:49:29.789225 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 18:49:29.789225 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 18:49:29.789225 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 18:49:29.789225 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 17 18:49:29.880317 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 18:49:30.003297 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 18:49:30.015055 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:49:30.015055 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:49:30.015055 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:49:30.015055 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:49:30.015055 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:49:30.015055 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:49:30.015055 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:49:30.015055 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:49:30.015055 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:49:30.015055 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:49:30.015055 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:49:30.015055 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:49:30.157048 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Mar 17 18:49:30.157048 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:49:30.157048 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2303965944" Mar 17 18:49:30.157048 ignition[1005]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2303965944": device or resource busy Mar 17 18:49:30.157048 ignition[1005]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2303965944", trying btrfs: device or resource busy Mar 17 18:49:30.157048 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2303965944" Mar 17 18:49:30.157048 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2303965944" Mar 17 18:49:30.157048 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem2303965944" Mar 17 18:49:30.157048 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem2303965944" Mar 17 18:49:30.157048 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Mar 17 18:49:30.157048 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Mar 17 18:49:30.157048 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:49:30.157048 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1123968285" Mar 17 18:49:30.157048 ignition[1005]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1123968285": device or resource busy Mar 17 18:49:30.038261 systemd[1]: mnt-oem2303965944.mount: Deactivated successfully. 
Mar 17 18:49:30.341039 ignition[1005]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1123968285", trying btrfs: device or resource busy Mar 17 18:49:30.341039 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1123968285" Mar 17 18:49:30.341039 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1123968285" Mar 17 18:49:30.341039 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem1123968285" Mar 17 18:49:30.341039 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem1123968285" Mar 17 18:49:30.341039 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Mar 17 18:49:30.341039 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:49:30.341039 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Mar 17 18:49:30.064738 systemd[1]: mnt-oem1123968285.mount: Deactivated successfully. Mar 17 18:49:30.447640 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Mar 17 18:49:30.633388 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:49:30.633388 ignition[1005]: INFO : files: op(14): [started] processing unit "waagent.service" Mar 17 18:49:30.633388 ignition[1005]: INFO : files: op(14): [finished] processing unit "waagent.service" Mar 17 18:49:30.633388 ignition[1005]: INFO : files: op(15): [started] processing unit "nvidia.service" Mar 17 18:49:30.633388 ignition[1005]: INFO : files: op(15): [finished] processing unit "nvidia.service" Mar 17 18:49:30.633388 ignition[1005]: INFO : files: op(16): [started] processing unit "containerd.service" Mar 17 18:49:30.720110 kernel: audit: type=1130 audit(1742237370.658:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:30.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:30.720241 ignition[1005]: INFO : files: op(16): op(17): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 18:49:30.720241 ignition[1005]: INFO : files: op(16): op(17): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 18:49:30.720241 ignition[1005]: INFO : files: op(16): [finished] processing unit "containerd.service" Mar 17 18:49:30.720241 ignition[1005]: INFO : files: op(18): [started] processing unit "prepare-helm.service" Mar 17 18:49:30.720241 ignition[1005]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:49:30.720241 ignition[1005]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:49:30.720241 ignition[1005]: INFO : files: op(18): [finished] processing unit "prepare-helm.service" Mar 17 18:49:30.720241 ignition[1005]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service" Mar 17 18:49:30.720241 ignition[1005]: INFO : files: op(1a): [finished] setting preset to enabled for "nvidia.service" Mar 17 18:49:30.720241 ignition[1005]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-helm.service" Mar 17 18:49:30.720241 ignition[1005]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 18:49:30.720241 ignition[1005]: INFO : files: op(1c): [started] setting preset to enabled for "waagent.service" Mar 17 18:49:30.720241 ignition[1005]: INFO : files: op(1c): [finished] setting preset to enabled for "waagent.service" Mar 17 18:49:30.720241 ignition[1005]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:49:30.720241 ignition[1005]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:49:30.720241 ignition[1005]: INFO : files: files passed Mar 17 18:49:30.720241 ignition[1005]: INFO : Ignition finished successfully Mar 17 18:49:30.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:30.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:30.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:30.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:30.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:30.647080 systemd[1]: Finished ignition-files.service. 
Mar 17 18:49:30.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:30.662292 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:49:30.947330 initrd-setup-root-after-ignition[1031]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:49:30.689142 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 18:49:30.696148 systemd[1]: Starting ignition-quench.service... Mar 17 18:49:30.720438 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:49:30.720558 systemd[1]: Finished ignition-quench.service. Mar 17 18:49:30.771596 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:49:31.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:30.778279 systemd[1]: Reached target ignition-complete.target. Mar 17 18:49:30.796461 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:49:30.824355 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:49:30.824469 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:49:30.834431 systemd[1]: Reached target initrd-fs.target. Mar 17 18:49:30.847155 systemd[1]: Reached target initrd.target. Mar 17 18:49:30.860211 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:49:30.861203 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:49:30.920503 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:49:30.931304 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:49:30.956148 systemd[1]: Stopped target nss-lookup.target. Mar 17 18:49:30.976746 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 18:49:31.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:30.987307 systemd[1]: Stopped target timers.target. Mar 17 18:49:30.999554 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:49:31.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:30.999741 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 18:49:31.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.008720 systemd[1]: Stopped target initrd.target. Mar 17 18:49:31.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.018525 systemd[1]: Stopped target basic.target. Mar 17 18:49:31.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:31.028495 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:49:31.039264 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:49:31.050376 systemd[1]: Stopped target initrd-root-device.target. Mar 17 18:49:31.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.225998 ignition[1044]: INFO : Ignition 2.14.0 Mar 17 18:49:31.225998 ignition[1044]: INFO : Stage: umount Mar 17 18:49:31.225998 ignition[1044]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:49:31.225998 ignition[1044]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:49:31.225998 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:49:31.225998 ignition[1044]: INFO : umount: umount passed Mar 17 18:49:31.225998 ignition[1044]: INFO : Ignition finished successfully Mar 17 18:49:31.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.060761 systemd[1]: Stopped target remote-fs.target. Mar 17 18:49:31.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.070174 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 18:49:31.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.080398 systemd[1]: Stopped target sysinit.target. Mar 17 18:49:31.093415 systemd[1]: Stopped target local-fs.target. Mar 17 18:49:31.103571 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:49:31.114691 systemd[1]: Stopped target swap.target. Mar 17 18:49:31.124564 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:49:31.124752 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:49:31.134440 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:49:31.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:31.145833 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:49:31.145997 systemd[1]: Stopped dracut-initqueue.service. Mar 17 18:49:31.156503 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:49:31.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.156690 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 18:49:31.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.170065 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:49:31.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.483000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:49:31.170208 systemd[1]: Stopped ignition-files.service. Mar 17 18:49:31.179861 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 17 18:49:31.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.180008 systemd[1]: Stopped flatcar-metadata-hostname.service. Mar 17 18:49:31.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.191575 systemd[1]: Stopping ignition-mount.service... Mar 17 18:49:31.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.213798 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:49:31.213986 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 18:49:31.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.226099 systemd[1]: Stopping sysroot-boot.service... Mar 17 18:49:31.230287 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:49:31.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.230473 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 18:49:31.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.240456 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Mar 17 18:49:31.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.240569 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 18:49:31.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.247406 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:49:31.641157 kernel: hv_netvsc 002248c1-6df4-0022-48c1-6df4002248c1 eth0: Data path switched from VF: enP12862s1 Mar 17 18:49:31.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.247519 systemd[1]: Stopped ignition-mount.service. Mar 17 18:49:31.258817 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:49:31.258925 systemd[1]: Stopped ignition-disks.service. Mar 17 18:49:31.273683 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:49:31.273787 systemd[1]: Stopped ignition-kargs.service. Mar 17 18:49:31.303228 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 18:49:31.303334 systemd[1]: Stopped ignition-fetch.service. Mar 17 18:49:31.321091 systemd[1]: Stopped target network.target. Mar 17 18:49:31.326439 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:49:31.326560 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:49:31.337167 systemd[1]: Stopped target paths.target. Mar 17 18:49:31.348697 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:49:31.353634 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:49:31.377700 systemd[1]: Stopped target slices.target. Mar 17 18:49:31.382918 systemd[1]: Stopped target sockets.target. Mar 17 18:49:31.393013 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:49:31.393205 systemd[1]: Closed iscsid.socket. Mar 17 18:49:31.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.401141 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:49:31.401265 systemd[1]: Closed iscsiuio.socket. Mar 17 18:49:31.414247 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 18:49:31.414398 systemd[1]: Stopped ignition-setup.service. Mar 17 18:49:31.424372 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:49:31.435720 systemd[1]: Stopping systemd-resolved.service... Mar 17 18:49:31.446645 systemd-networkd[848]: eth0: DHCPv6 lease lost Mar 17 18:49:31.767000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:49:31.448815 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:49:31.450567 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:49:31.450686 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:49:31.459881 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Mar 17 18:49:31.459983 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:49:31.472636 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:49:31.472745 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:49:31.484162 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:49:31.484207 systemd[1]: Closed systemd-networkd.socket. Mar 17 18:49:31.493975 systemd[1]: Stopping network-cleanup.service... Mar 17 18:49:31.502588 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:49:31.502678 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 18:49:31.507730 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:49:31.507783 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:49:31.523802 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:49:31.523861 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:49:31.529628 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:49:31.541251 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:49:31.541880 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:49:31.542011 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:49:31.551516 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:49:31.551578 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 18:49:31.560393 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:49:31.560434 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 18:49:31.566489 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:49:31.566552 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:49:31.575936 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:49:31.575991 systemd[1]: Stopped dracut-cmdline.service. Mar 17 18:49:31.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.586371 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:49:31.586421 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:49:31.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:31.597005 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:49:31.607387 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:49:31.607453 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:49:31.624715 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:49:31.624821 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 18:49:31.724402 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:49:31.724522 systemd[1]: Stopped network-cleanup.service. Mar 17 18:49:32.001000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:49:32.001000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:49:32.001000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:49:32.001000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:49:32.007000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:49:31.918172 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:49:31.918287 systemd[1]: Stopped sysroot-boot.service. Mar 17 18:49:31.929349 systemd[1]: Reached target initrd-switch-root.target. 
Mar 17 18:49:31.939711 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:49:31.939798 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:49:31.951502 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:49:32.004296 systemd[1]: Switching root. Mar 17 18:49:32.051262 iscsid[855]: iscsid shutting down. Mar 17 18:49:32.051343 systemd-journald[276]: Journal stopped Mar 17 18:49:45.724353 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Mar 17 18:49:45.724375 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:49:45.724385 kernel: SELinux: Class anon_inode not defined in policy. Mar 17 18:49:45.724396 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:49:45.724406 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:49:45.724413 kernel: SELinux: policy capability open_perms=1 Mar 17 18:49:45.724422 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:49:45.724430 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:49:45.724438 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:49:45.724446 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:49:45.724455 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:49:45.724463 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:49:45.724470 kernel: kauditd_printk_skb: 46 callbacks suppressed Mar 17 18:49:45.724479 kernel: audit: type=1403 audit(1742237375.397:85): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:49:45.724489 systemd[1]: Successfully loaded SELinux policy in 336.135ms. Mar 17 18:49:45.724499 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.169ms. Mar 17 18:49:45.724510 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:49:45.724519 systemd[1]: Detected virtualization microsoft. Mar 17 18:49:45.724527 systemd[1]: Detected architecture arm64. Mar 17 18:49:45.724536 systemd[1]: Detected first boot. Mar 17 18:49:45.724545 systemd[1]: Hostname set to <ci-3510.3.7-a-c36c8d7be6>. Mar 17 18:49:45.724554 systemd[1]: Initializing machine ID from random generator. Mar 17 18:49:45.724563 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 18:49:45.724573 kernel: audit: type=1400 audit(1742237378.509:86): avc: denied { associate } for pid=1096 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:49:45.724583 kernel: audit: type=1300 audit(1742237378.509:86): arch=c00000b7 syscall=5 success=yes exit=0 a0=40000225f2 a1=4000028810 a2=40000266c0 a3=32 items=0 ppid=1079 pid=1096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:49:45.724594 kernel: audit: type=1327 audit(1742237378.509:86): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:49:45.724618 kernel: audit: type=1400 audit(1742237378.546:87): avc: denied { associate } for pid=1096 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:49:45.724630 kernel: audit: type=1300 audit(1742237378.546:87): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000226c9 a2=1ed a3=0 items=2 ppid=1079 pid=1096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:49:45.724639 kernel: audit: type=1307 audit(1742237378.546:87): cwd="/" Mar 17 18:49:45.724648 kernel: audit: type=1302 audit(1742237378.546:87): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:45.724657 kernel: audit: type=1302 audit(1742237378.546:87): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:45.724666 kernel: audit: type=1327 audit(1742237378.546:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:49:45.724675 systemd[1]: Populated /etc with preset unit settings. Mar 17 18:49:45.724685 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:49:45.724695 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:49:45.724705 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:49:45.724714 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:49:45.724723 systemd[1]: Unnecessary job was removed for dev-sda6.device. 
Mar 17 18:49:45.724733 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 18:49:45.724742 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 18:49:45.724753 systemd[1]: Created slice system-getty.slice. Mar 17 18:49:45.724763 systemd[1]: Created slice system-modprobe.slice. Mar 17 18:49:45.724773 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 18:49:45.724782 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 18:49:45.724791 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 18:49:45.724801 systemd[1]: Created slice user.slice. Mar 17 18:49:45.724811 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:49:45.724820 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 18:49:45.724829 systemd[1]: Set up automount boot.automount. Mar 17 18:49:45.724838 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 18:49:45.724849 systemd[1]: Reached target integritysetup.target. Mar 17 18:49:45.724858 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:49:45.724867 systemd[1]: Reached target remote-fs.target. Mar 17 18:49:45.724876 systemd[1]: Reached target slices.target. Mar 17 18:49:45.724885 systemd[1]: Reached target swap.target. Mar 17 18:49:45.724894 systemd[1]: Reached target torcx.target. Mar 17 18:49:45.724903 systemd[1]: Reached target veritysetup.target. Mar 17 18:49:45.724913 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:49:45.724923 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:49:45.724932 kernel: audit: type=1400 audit(1742237385.301:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:49:45.724941 systemd[1]: Listening on systemd-journald-audit.socket. Mar 17 18:49:45.724950 kernel: audit: type=1335 audit(1742237385.307:89): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Mar 17 18:49:45.724959 systemd[1]: Listening on systemd-journald-dev-log.socket. Mar 17 18:49:45.724968 systemd[1]: Listening on systemd-journald.socket. Mar 17 18:49:45.724977 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:49:45.724986 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:49:45.724997 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:49:45.725007 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:49:45.725017 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:49:45.725026 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:49:45.725037 systemd[1]: Mounting media.mount... Mar 17 18:49:45.725046 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:49:45.725055 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:49:45.725065 systemd[1]: Mounting tmp.mount... Mar 17 18:49:45.725074 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:49:45.725083 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:49:45.725093 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:49:45.725102 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:49:45.725111 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:49:45.725121 systemd[1]: Starting modprobe@drm.service... Mar 17 18:49:45.725131 systemd[1]: Starting modprobe@efi_pstore.service... 
Mar 17 18:49:45.725140 systemd[1]: Starting modprobe@fuse.service... Mar 17 18:49:45.725149 systemd[1]: Starting modprobe@loop.service... Mar 17 18:49:45.725159 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:49:45.725168 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 17 18:49:45.725177 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Mar 17 18:49:45.725186 systemd[1]: Starting systemd-journald.service... Mar 17 18:49:45.725195 kernel: loop: module loaded Mar 17 18:49:45.725206 kernel: fuse: init (API version 7.34) Mar 17 18:49:45.725215 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:49:45.725225 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:49:45.725234 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:49:45.725244 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:49:45.725253 kernel: audit: type=1305 audit(1742237385.721:90): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:49:45.725267 systemd-journald[1219]: Journal started Mar 17 18:49:45.725310 systemd-journald[1219]: Runtime Journal (/run/log/journal/b3b0226c5fce4403acb281e8af1ef933) is 8.0M, max 78.5M, 70.5M free. Mar 17 18:49:45.307000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Mar 17 18:49:45.721000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:49:45.755560 systemd[1]: Started systemd-journald.service. Mar 17 18:49:45.755649 kernel: audit: type=1300 audit(1742237385.721:90): arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc29ed910 a2=4000 a3=1 items=0 ppid=1 pid=1219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:49:45.721000 audit[1219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc29ed910 a2=4000 a3=1 items=0 ppid=1 pid=1219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:49:45.721000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:49:45.770615 kernel: audit: type=1327 audit(1742237385.721:90): proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:49:45.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.783836 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:49:45.804614 kernel: audit: type=1130 audit(1742237385.782:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.812005 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:49:45.816783 systemd[1]: Mounted media.mount. Mar 17 18:49:45.820772 systemd[1]: Mounted sys-kernel-debug.mount. 
Mar 17 18:49:45.826067 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:49:45.831264 systemd[1]: Mounted tmp.mount. Mar 17 18:49:45.835250 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:49:45.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.842021 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:49:45.866984 kernel: audit: type=1130 audit(1742237385.839:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.868188 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:49:45.887858 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:49:45.893486 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:49:45.893677 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:49:45.899309 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:49:45.899475 systemd[1]: Finished modprobe@drm.service. Mar 17 18:49:45.904953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:49:45.905113 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:49:45.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.932568 kernel: audit: type=1130 audit(1742237385.867:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.932639 kernel: audit: type=1130 audit(1742237385.892:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.953204 kernel: audit: type=1131 audit(1742237385.892:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.952712 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:49:45.952897 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:49:45.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:45.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.959257 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:49:45.959514 systemd[1]: Finished modprobe@loop.service. Mar 17 18:49:45.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.964859 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:49:45.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.970538 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:49:45.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.978088 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:49:45.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.983771 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:49:45.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:45.990700 systemd[1]: Reached target network-pre.target. Mar 17 18:49:45.999260 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Mar 17 18:49:46.005408 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:49:46.009758 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:49:46.024729 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:49:46.030874 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:49:46.035810 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:49:46.037256 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:49:46.041949 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:49:46.043340 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:49:46.048930 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:49:46.055178 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:49:46.062224 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:49:46.067803 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:49:46.083022 udevadm[1248]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 18:49:46.091539 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:49:46.097423 systemd-journald[1219]: Time spent on flushing to /var/log/journal/b3b0226c5fce4403acb281e8af1ef933 is 13.448ms for 1019 entries. Mar 17 18:49:46.097423 systemd-journald[1219]: System Journal (/var/log/journal/b3b0226c5fce4403acb281e8af1ef933) is 8.0M, max 2.6G, 2.6G free. Mar 17 18:49:46.163862 systemd-journald[1219]: Received client request to flush runtime journal. Mar 17 18:49:46.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:46.097092 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:49:46.164982 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:49:46.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:46.175878 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:49:46.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:46.645009 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:49:46.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:46.652223 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:49:47.024747 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:49:47.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:47.068685 systemd[1]: Finished systemd-hwdb-update.service. 
Mar 17 18:49:47.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:47.076084 systemd[1]: Starting systemd-udevd.service... Mar 17 18:49:47.096433 systemd-udevd[1259]: Using default interface naming scheme 'v252'. Mar 17 18:49:47.315438 systemd[1]: Started systemd-udevd.service. Mar 17 18:49:47.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:47.328263 systemd[1]: Starting systemd-networkd.service... Mar 17 18:49:47.364362 systemd[1]: Found device dev-ttyAMA0.device. Mar 17 18:49:47.406303 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:49:47.455649 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 18:49:47.463141 systemd[1]: Started systemd-userdbd.service. Mar 17 18:49:47.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:47.489000 audit[1268]: AVC avc: denied { confidentiality } for pid=1268 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 18:49:47.531057 kernel: hv_vmbus: registering driver hv_balloon Mar 17 18:49:47.531170 kernel: hv_vmbus: registering driver hyperv_fb Mar 17 18:49:47.531225 kernel: hv_utils: Registering HyperV Utility Driver Mar 17 18:49:47.547375 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 17 18:49:47.547453 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 17 18:49:47.564635 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 17 18:49:47.564719 kernel: hv_vmbus: registering driver hv_utils Mar 17 18:49:47.576935 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 17 18:49:47.577054 kernel: Console: switching to colour dummy device 80x25 Mar 17 18:49:47.588119 kernel: hv_utils: Heartbeat IC version 3.0 Mar 17 18:49:47.588212 kernel: hv_utils: Shutdown IC version 3.2 Mar 17 18:49:47.592667 kernel: Console: switching to colour frame buffer device 128x48 Mar 17 18:49:47.592783 kernel: hv_utils: TimeSync IC version 4.0 Mar 17 18:49:47.489000 audit[1268]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaadfc17eb0 a1=aa2c a2=ffff81f324b0 a3=aaaadf975010 items=12 ppid=1259 pid=1268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:49:47.489000 audit: CWD cwd="/" Mar 17 18:49:47.489000 audit: PATH item=0 name=(null) inode=6125 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:47.489000 audit: PATH item=1 name=(null) inode=10039 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:47.489000 audit: PATH item=2 name=(null) inode=10039 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:47.489000 audit: PATH item=3 name=(null) inode=10040 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:47.489000 audit: PATH item=4 name=(null) inode=10039 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:47.489000 audit: PATH item=5 name=(null) inode=10041 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:47.489000 audit: PATH item=6 name=(null) inode=10039 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:47.489000 audit: PATH item=7 name=(null) inode=10042 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:47.489000 audit: PATH item=8 name=(null) inode=10039 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:47.489000 audit: PATH item=9 name=(null) inode=10043 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:47.489000 audit: PATH item=10 name=(null) inode=10039 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:47.489000 audit: PATH item=11 name=(null) inode=10044 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:47.489000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 18:49:47.832959 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:49:47.840098 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:49:47.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:47.846905 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:49:48.074311 lvm[1337]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:49:48.094615 systemd-networkd[1280]: lo: Link UP Mar 17 18:49:48.094927 systemd-networkd[1280]: lo: Gained carrier Mar 17 18:49:48.095505 systemd-networkd[1280]: Enumeration completed Mar 17 18:49:48.095845 systemd[1]: Started systemd-networkd.service. Mar 17 18:49:48.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.103178 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:49:48.117304 systemd-networkd[1280]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:49:48.123277 systemd[1]: Finished lvm2-activation-early.service. 
Mar 17 18:49:48.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.129546 systemd[1]: Reached target cryptsetup.target. Mar 17 18:49:48.136461 systemd[1]: Starting lvm2-activation.service... Mar 17 18:49:48.141087 lvm[1340]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:49:48.168015 systemd[1]: Finished lvm2-activation.service. Mar 17 18:49:48.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.173983 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:49:48.184656 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:49:48.184687 systemd[1]: Reached target local-fs.target. Mar 17 18:49:48.184889 kernel: mlx5_core 323e:00:02.0 enP12862s1: Link up Mar 17 18:49:48.189685 systemd[1]: Reached target machines.target. Mar 17 18:49:48.196136 systemd[1]: Starting ldconfig.service... Mar 17 18:49:48.215133 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:49:48.215251 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:49:48.216017 kernel: hv_netvsc 002248c1-6df4-0022-48c1-6df4002248c1 eth0: Data path switched to VF: enP12862s1 Mar 17 18:49:48.216734 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:49:48.218683 systemd-networkd[1280]: enP12862s1: Link UP Mar 17 18:49:48.219164 systemd-networkd[1280]: eth0: Link UP Mar 17 18:49:48.219234 systemd-networkd[1280]: eth0: Gained carrier Mar 17 18:49:48.223132 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:49:48.230491 systemd-networkd[1280]: enP12862s1: Gained carrier Mar 17 18:49:48.230792 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:49:48.238064 systemd[1]: Starting systemd-sysext.service... Mar 17 18:49:48.251952 systemd-networkd[1280]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 18:49:48.254591 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1343 (bootctl) Mar 17 18:49:48.256164 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:49:48.312335 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:49:48.319815 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:49:48.320109 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:49:48.363674 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:49:48.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.387890 kernel: loop0: detected capacity change from 0 to 194096 Mar 17 18:49:48.433590 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:49:48.435305 systemd[1]: Finished systemd-machine-id-commit.service. 
Mar 17 18:49:48.440882 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:49:48.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.473894 kernel: loop1: detected capacity change from 0 to 194096 Mar 17 18:49:48.478401 (sd-sysext)[1360]: Using extensions 'kubernetes'. Mar 17 18:49:48.479406 (sd-sysext)[1360]: Merged extensions into '/usr'. Mar 17 18:49:48.499921 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:49:48.505527 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:49:48.507212 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:49:48.513558 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:49:48.520098 systemd[1]: Starting modprobe@loop.service... Mar 17 18:49:48.531620 systemd-fsck[1355]: fsck.fat 4.2 (2021-01-31) Mar 17 18:49:48.531620 systemd-fsck[1355]: /dev/sda1: 236 files, 117179/258078 clusters Mar 17 18:49:48.533939 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:49:48.534970 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:49:48.540153 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 18:49:48.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.550132 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:49:48.556654 systemd[1]: Finished systemd-sysext.service. Mar 17 18:49:48.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.562531 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:49:48.562712 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:49:48.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.568048 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:49:48.568204 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:49:48.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:48.574093 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:49:48.574310 systemd[1]: Finished modprobe@loop.service. Mar 17 18:49:48.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.583638 systemd[1]: Mounting boot.mount... Mar 17 18:49:48.588919 systemd[1]: Starting ensure-sysext.service... Mar 17 18:49:48.594698 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:49:48.594796 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:49:48.596317 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:49:48.605559 systemd[1]: Mounted boot.mount. Mar 17 18:49:48.611929 systemd[1]: Reloading. Mar 17 18:49:48.625236 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:49:48.639500 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:49:48.654142 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:49:48.674796 /usr/lib/systemd/system-generators/torcx-generator[1402]: time="2025-03-17T18:49:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:49:48.675228 /usr/lib/systemd/system-generators/torcx-generator[1402]: time="2025-03-17T18:49:48Z" level=info msg="torcx already run" Mar 17 18:49:48.768008 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:49:48.768032 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:49:48.784005 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:49:48.860530 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:49:48.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.875543 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:49:48.877464 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:49:48.883822 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:49:48.894128 systemd[1]: Starting modprobe@loop.service... Mar 17 18:49:48.899812 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Mar 17 18:49:48.900020 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:49:48.900941 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:49:48.901161 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:49:48.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.907451 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:49:48.907633 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:49:48.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.915700 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:49:48.915928 systemd[1]: Finished modprobe@loop.service. Mar 17 18:49:48.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.923283 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:49:48.925104 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:49:48.932446 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:49:48.939199 systemd[1]: Starting modprobe@loop.service... Mar 17 18:49:48.943712 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:49:48.943890 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:49:48.945398 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:49:48.945597 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:49:48.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.952051 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 17 18:49:48.952239 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:49:48.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.960153 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:49:48.960362 systemd[1]: Finished modprobe@loop.service. Mar 17 18:49:48.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:48.969383 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:49:48.971135 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:49:48.982672 systemd[1]: Starting modprobe@drm.service... Mar 17 18:49:48.990450 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:49:48.997280 systemd[1]: Starting modprobe@loop.service... Mar 17 18:49:49.001830 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:49:49.002022 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:49:49.003054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:49:49.003287 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:49:49.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:49.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:49.010666 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:49:49.011045 systemd[1]: Finished modprobe@drm.service. Mar 17 18:49:49.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:49.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:49.016476 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:49:49.016650 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:49:49.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 18:49:49.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:49.022856 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:49:49.023201 systemd[1]: Finished modprobe@loop.service. Mar 17 18:49:49.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:49.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:49.029568 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:49:49.029663 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:49:49.031193 systemd[1]: Finished ensure-sysext.service. Mar 17 18:49:49.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:49.225381 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 18:49:49.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:49.233980 systemd[1]: Starting audit-rules.service... Mar 17 18:49:49.239742 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:49:49.249295 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:49:49.257123 systemd[1]: Starting systemd-resolved.service... Mar 17 18:49:49.264427 systemd[1]: Starting systemd-timesyncd.service... Mar 17 18:49:49.270912 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:49:49.276798 systemd[1]: Finished clean-ca-certificates.service. Mar 17 18:49:49.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:49.283049 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:49:49.309000 audit[1500]: SYSTEM_BOOT pid=1500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:49:49.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:49.312015 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:49:49.384691 systemd[1]: Started systemd-timesyncd.service. 
Mar 17 18:49:49.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:49.390452 systemd[1]: Reached target time-set.target. Mar 17 18:49:49.440636 systemd-resolved[1497]: Positive Trust Anchors: Mar 17 18:49:49.440656 systemd-resolved[1497]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:49:49.440707 systemd-resolved[1497]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:49:49.485472 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 18:49:49.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:49.498221 systemd-resolved[1497]: Using system hostname 'ci-3510.3.7-a-c36c8d7be6'. Mar 17 18:49:49.499964 systemd[1]: Started systemd-resolved.service. Mar 17 18:49:49.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:49.505564 systemd[1]: Reached target network.target. Mar 17 18:49:49.511045 systemd[1]: Reached target nss-lookup.target. Mar 17 18:49:49.584000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:49:49.584000 audit[1517]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe6f06e30 a2=420 a3=0 items=0 ppid=1493 pid=1517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:49:49.584000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:49:49.586075 augenrules[1517]: No rules Mar 17 18:49:49.587244 systemd[1]: Finished audit-rules.service. Mar 17 18:49:49.675540 systemd-timesyncd[1499]: Contacted time server 23.155.40.38:123 (0.flatcar.pool.ntp.org). Mar 17 18:49:49.675617 systemd-timesyncd[1499]: Initial clock synchronization to Mon 2025-03-17 18:49:49.675132 UTC. Mar 17 18:49:49.697972 systemd-networkd[1280]: eth0: Gained IPv6LL Mar 17 18:49:49.700082 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:49:49.707923 systemd[1]: Reached target network-online.target. Mar 17 18:49:54.789283 ldconfig[1342]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:49:54.806294 systemd[1]: Finished ldconfig.service. Mar 17 18:49:54.813486 systemd[1]: Starting systemd-update-done.service... Mar 17 18:49:54.836033 systemd[1]: Finished systemd-update-done.service. Mar 17 18:49:54.842146 systemd[1]: Reached target sysinit.target. Mar 17 18:49:54.847680 systemd[1]: Started motdgen.path. 
Mar 17 18:49:54.853337 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:49:54.861211 systemd[1]: Started logrotate.timer. Mar 17 18:49:54.866254 systemd[1]: Started mdadm.timer. Mar 17 18:49:54.870485 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:49:54.876286 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:49:54.876328 systemd[1]: Reached target paths.target. Mar 17 18:49:54.881765 systemd[1]: Reached target timers.target. Mar 17 18:49:54.887058 systemd[1]: Listening on dbus.socket. Mar 17 18:49:54.893212 systemd[1]: Starting docker.socket... Mar 17 18:49:54.899634 systemd[1]: Listening on sshd.socket. Mar 17 18:49:54.904696 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:49:54.905206 systemd[1]: Listening on docker.socket. Mar 17 18:49:54.910365 systemd[1]: Reached target sockets.target. Mar 17 18:49:54.915633 systemd[1]: Reached target basic.target. Mar 17 18:49:54.920518 systemd[1]: System is tainted: cgroupsv1 Mar 17 18:49:54.920574 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:49:54.920600 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:49:54.921973 systemd[1]: Starting containerd.service... Mar 17 18:49:54.927698 systemd[1]: Starting dbus.service... Mar 17 18:49:54.933023 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 18:49:54.939266 systemd[1]: Starting extend-filesystems.service... Mar 17 18:49:54.945116 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 18:49:54.958040 systemd[1]: Starting kubelet.service... Mar 17 18:49:54.964013 systemd[1]: Starting motdgen.service... Mar 17 18:49:54.969309 systemd[1]: Started nvidia.service. Mar 17 18:49:54.975804 systemd[1]: Starting prepare-helm.service... Mar 17 18:49:54.981716 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 18:49:54.989098 systemd[1]: Starting sshd-keygen.service... Mar 17 18:49:54.996697 systemd[1]: Starting systemd-logind.service... Mar 17 18:49:55.018155 jq[1532]: false Mar 17 18:49:55.002248 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:49:55.002358 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 18:49:55.004337 systemd[1]: Starting update-engine.service... Mar 17 18:49:55.014247 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 18:49:55.040529 jq[1551]: true Mar 17 18:49:55.029264 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 18:49:55.029614 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 18:49:55.040963 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:49:55.041336 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Mar 17 18:49:55.064609 extend-filesystems[1533]: Found loop1 Mar 17 18:49:55.089842 extend-filesystems[1533]: Found sda Mar 17 18:49:55.089842 extend-filesystems[1533]: Found sda1 Mar 17 18:49:55.089842 extend-filesystems[1533]: Found sda2 Mar 17 18:49:55.089842 extend-filesystems[1533]: Found sda3 Mar 17 18:49:55.089842 extend-filesystems[1533]: Found usr Mar 17 18:49:55.089842 extend-filesystems[1533]: Found sda4 Mar 17 18:49:55.089842 extend-filesystems[1533]: Found sda6 Mar 17 18:49:55.089842 extend-filesystems[1533]: Found sda7 Mar 17 18:49:55.089842 extend-filesystems[1533]: Found sda9 Mar 17 18:49:55.089842 extend-filesystems[1533]: Checking size of /dev/sda9 Mar 17 18:49:55.297817 jq[1565]: true Mar 17 18:49:55.075703 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:49:55.298078 env[1566]: time="2025-03-17T18:49:55.151888855Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:49:55.298078 env[1566]: time="2025-03-17T18:49:55.293701655Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:49:55.298078 env[1566]: time="2025-03-17T18:49:55.293902613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:49:55.298334 tar[1556]: linux-arm64/helm Mar 17 18:49:55.298485 extend-filesystems[1533]: Old size kept for /dev/sda9 Mar 17 18:49:55.298485 extend-filesystems[1533]: Found sr0 Mar 17 18:49:55.076030 systemd[1]: Finished motdgen.service. Mar 17 18:49:55.337488 env[1566]: time="2025-03-17T18:49:55.299142391Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:49:55.337488 env[1566]: time="2025-03-17T18:49:55.299229510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:49:55.337488 env[1566]: time="2025-03-17T18:49:55.299528666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:49:55.337488 env[1566]: time="2025-03-17T18:49:55.299549306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:49:55.337488 env[1566]: time="2025-03-17T18:49:55.299562986Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:49:55.337488 env[1566]: time="2025-03-17T18:49:55.299572385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:49:55.337488 env[1566]: time="2025-03-17T18:49:55.299643425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:49:55.337488 env[1566]: time="2025-03-17T18:49:55.299821183Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:49:55.337488 env[1566]: time="2025-03-17T18:49:55.300013420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:49:55.337488 env[1566]: time="2025-03-17T18:49:55.300031900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 18:49:55.161790 systemd-logind[1549]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Mar 17 18:49:55.338068 env[1566]: time="2025-03-17T18:49:55.300092739Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:49:55.338068 env[1566]: time="2025-03-17T18:49:55.300104539Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:49:55.338068 env[1566]: time="2025-03-17T18:49:55.325834034Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:49:55.338068 env[1566]: time="2025-03-17T18:49:55.325908273Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:49:55.338068 env[1566]: time="2025-03-17T18:49:55.325926953Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:49:55.338068 env[1566]: time="2025-03-17T18:49:55.325993872Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:49:55.338068 env[1566]: time="2025-03-17T18:49:55.326010552Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:49:55.338068 env[1566]: time="2025-03-17T18:49:55.326025632Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:49:55.338068 env[1566]: time="2025-03-17T18:49:55.326049592Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 18:49:55.338068 env[1566]: time="2025-03-17T18:49:55.326535706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:49:55.338068 env[1566]: time="2025-03-17T18:49:55.326559746Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 18:49:55.338068 env[1566]: time="2025-03-17T18:49:55.326586945Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:49:55.338068 env[1566]: time="2025-03-17T18:49:55.326602745Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:49:55.338068 env[1566]: time="2025-03-17T18:49:55.326618425Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:49:55.338396 bash[1591]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:49:55.167066 systemd-logind[1549]: New seat seat0. Mar 17 18:49:55.338524 env[1566]: time="2025-03-17T18:49:55.326884902Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 18:49:55.338524 env[1566]: time="2025-03-17T18:49:55.326999980Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Mar 17 18:49:55.338524 env[1566]: time="2025-03-17T18:49:55.327388416Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:49:55.338524 env[1566]: time="2025-03-17T18:49:55.327430375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:49:55.338524 env[1566]: time="2025-03-17T18:49:55.327446775Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:49:55.338524 env[1566]: time="2025-03-17T18:49:55.327514494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:49:55.338524 env[1566]: time="2025-03-17T18:49:55.327529294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:49:55.338524 env[1566]: time="2025-03-17T18:49:55.327542654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:49:55.338524 env[1566]: time="2025-03-17T18:49:55.327565494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:49:55.338524 env[1566]: time="2025-03-17T18:49:55.327580174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:49:55.338524 env[1566]: time="2025-03-17T18:49:55.327594253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:49:55.338524 env[1566]: time="2025-03-17T18:49:55.327606733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:49:55.338524 env[1566]: time="2025-03-17T18:49:55.327755292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:49:55.338524 env[1566]: time="2025-03-17T18:49:55.327776331Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:49:55.338524 env[1566]: time="2025-03-17T18:49:55.327960489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:49:55.178979 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:49:55.338946 env[1566]: time="2025-03-17T18:49:55.327980329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:49:55.338946 env[1566]: time="2025-03-17T18:49:55.327993769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:49:55.338946 env[1566]: time="2025-03-17T18:49:55.328009489Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:49:55.338946 env[1566]: time="2025-03-17T18:49:55.328025888Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:49:55.338946 env[1566]: time="2025-03-17T18:49:55.328050608Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Mar 17 18:49:55.338946 env[1566]: time="2025-03-17T18:49:55.328069848Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:49:55.338946 env[1566]: time="2025-03-17T18:49:55.328114407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 18:49:55.179314 systemd[1]: Finished extend-filesystems.service. Mar 17 18:49:55.264209 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 18:49:55.339189 env[1566]: time="2025-03-17T18:49:55.328349964Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:49:55.339189 env[1566]: time="2025-03-17T18:49:55.328413044Z" level=info msg="Connect containerd service" Mar 17 18:49:55.339189 env[1566]: time="2025-03-17T18:49:55.328461763Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:49:55.339189 env[1566]: time="2025-03-17T18:49:55.329218194Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:49:55.339189 env[1566]: time="2025-03-17T18:49:55.329291873Z" level=info msg="Start subscribing containerd event" Mar 17 18:49:55.339189 env[1566]: time="2025-03-17T18:49:55.329331593Z" level=info msg="Start recovering state" Mar 17 18:49:55.339189 env[1566]: 
time="2025-03-17T18:49:55.329404832Z" level=info msg="Start event monitor" Mar 17 18:49:55.339189 env[1566]: time="2025-03-17T18:49:55.329422032Z" level=info msg="Start snapshots syncer" Mar 17 18:49:55.339189 env[1566]: time="2025-03-17T18:49:55.329431672Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:49:55.339189 env[1566]: time="2025-03-17T18:49:55.329440152Z" level=info msg="Start streaming server" Mar 17 18:49:55.339189 env[1566]: time="2025-03-17T18:49:55.329885946Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:49:55.339189 env[1566]: time="2025-03-17T18:49:55.329932506Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:49:55.330174 systemd[1]: Started containerd.service. Mar 17 18:49:55.357517 env[1566]: time="2025-03-17T18:49:55.351050576Z" level=info msg="containerd successfully booted in 0.201719s" Mar 17 18:49:55.352803 systemd[1]: nvidia.service: Deactivated successfully. Mar 17 18:49:55.362465 dbus-daemon[1531]: [system] SELinux support is enabled Mar 17 18:49:55.362696 systemd[1]: Started dbus.service. Mar 17 18:49:55.369000 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:49:55.369042 systemd[1]: Reached target system-config.target. Mar 17 18:49:55.377310 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:49:55.377340 systemd[1]: Reached target user-config.target. Mar 17 18:49:55.385029 dbus-daemon[1531]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 18:49:55.385031 systemd[1]: Started systemd-logind.service. Mar 17 18:49:55.716331 update_engine[1550]: I0317 18:49:55.703722 1550 main.cc:92] Flatcar Update Engine starting Mar 17 18:49:55.761016 systemd[1]: Started update-engine.service. Mar 17 18:49:55.766226 update_engine[1550]: I0317 18:49:55.766079 1550 update_check_scheduler.cc:74] Next update check in 5m54s Mar 17 18:49:55.770320 systemd[1]: Started locksmithd.service. Mar 17 18:49:55.804176 tar[1556]: linux-arm64/LICENSE Mar 17 18:49:55.804305 tar[1556]: linux-arm64/README.md Mar 17 18:49:55.813742 systemd[1]: Finished prepare-helm.service. Mar 17 18:49:56.097630 systemd[1]: Started kubelet.service. Mar 17 18:49:56.617787 kubelet[1653]: E0317 18:49:56.617737 1653 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:56.620055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:56.620202 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:56.817424 sshd_keygen[1553]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:49:56.836629 systemd[1]: Finished sshd-keygen.service. Mar 17 18:49:56.843605 systemd[1]: Starting issuegen.service... Mar 17 18:49:56.848852 systemd[1]: Started waagent.service. Mar 17 18:49:56.856199 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:49:56.856730 systemd[1]: Finished issuegen.service. Mar 17 18:49:56.864051 systemd[1]: Starting systemd-user-sessions.service... 
Mar 17 18:49:56.897237 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:49:56.913228 systemd[1]: Started getty@tty1.service. Mar 17 18:49:56.919610 systemd[1]: Started serial-getty@ttyAMA0.service. Mar 17 18:49:56.925416 systemd[1]: Reached target getty.target. Mar 17 18:49:56.931791 systemd[1]: Reached target multi-user.target. Mar 17 18:49:56.938460 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:49:56.947509 locksmithd[1644]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:49:56.948758 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:49:56.949048 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:49:56.962084 systemd[1]: Startup finished in 15.022s (kernel) + 21.968s (userspace) = 36.991s. Mar 17 18:49:57.503998 login[1682]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Mar 17 18:49:57.504497 login[1681]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 18:49:57.556444 systemd[1]: Created slice user-500.slice. Mar 17 18:49:57.557536 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:49:57.560522 systemd-logind[1549]: New session 1 of user core. Mar 17 18:49:57.581543 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:49:57.582969 systemd[1]: Starting user@500.service... Mar 17 18:49:57.612949 (systemd)[1688]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:57.771798 systemd[1688]: Queued start job for default target default.target. Mar 17 18:49:57.772079 systemd[1688]: Reached target paths.target. Mar 17 18:49:57.772096 systemd[1688]: Reached target sockets.target. Mar 17 18:49:57.772107 systemd[1688]: Reached target timers.target. Mar 17 18:49:57.772117 systemd[1688]: Reached target basic.target. Mar 17 18:49:57.772170 systemd[1688]: Reached target default.target. Mar 17 18:49:57.772192 systemd[1688]: Startup finished in 152ms. Mar 17 18:49:57.772254 systemd[1]: Started user@500.service. Mar 17 18:49:57.773232 systemd[1]: Started session-1.scope. Mar 17 18:49:58.505815 login[1682]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 18:49:58.509814 systemd-logind[1549]: New session 2 of user core. Mar 17 18:49:58.510359 systemd[1]: Started session-2.scope. Mar 17 18:50:02.816551 waagent[1675]: 2025-03-17T18:50:02.816415Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Mar 17 18:50:02.856319 waagent[1675]: 2025-03-17T18:50:02.856209Z INFO Daemon Daemon OS: flatcar 3510.3.7 Mar 17 18:50:02.863277 waagent[1675]: 2025-03-17T18:50:02.863184Z INFO Daemon Daemon Python: 3.9.16 Mar 17 18:50:02.871126 waagent[1675]: 2025-03-17T18:50:02.871010Z INFO Daemon Daemon Run daemon Mar 17 18:50:02.877139 waagent[1675]: 2025-03-17T18:50:02.877049Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7' Mar 17 18:50:02.898815 waagent[1675]: 2025-03-17T18:50:02.898632Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Mar 17 18:50:02.918896 waagent[1675]: 2025-03-17T18:50:02.918685Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 17 18:50:02.930813 waagent[1675]: 2025-03-17T18:50:02.930713Z INFO Daemon Daemon cloud-init is enabled: False Mar 17 18:50:02.937323 waagent[1675]: 2025-03-17T18:50:02.937222Z INFO Daemon Daemon Using waagent for provisioning Mar 17 18:50:02.944809 waagent[1675]: 2025-03-17T18:50:02.944723Z INFO Daemon Daemon Activate resource disk Mar 17 18:50:02.951039 waagent[1675]: 2025-03-17T18:50:02.950945Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 17 18:50:02.969153 waagent[1675]: 2025-03-17T18:50:02.969049Z INFO Daemon Daemon Found device: None Mar 17 18:50:02.975039 waagent[1675]: 2025-03-17T18:50:02.974948Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 17 18:50:02.985301 waagent[1675]: 2025-03-17T18:50:02.985198Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 17 18:50:02.999669 waagent[1675]: 2025-03-17T18:50:02.999582Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 18:50:03.008448 waagent[1675]: 2025-03-17T18:50:03.008357Z INFO Daemon Daemon Running default provisioning handler Mar 17 18:50:03.024520 waagent[1675]: 2025-03-17T18:50:03.024347Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Mar 17 18:50:03.043152 waagent[1675]: 2025-03-17T18:50:03.042973Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 17 18:50:03.056806 waagent[1675]: 2025-03-17T18:50:03.056651Z INFO Daemon Daemon cloud-init is enabled: False Mar 17 18:50:03.064466 waagent[1675]: 2025-03-17T18:50:03.064363Z INFO Daemon Daemon Copying ovf-env.xml Mar 17 18:50:03.170588 waagent[1675]: 2025-03-17T18:50:03.170364Z INFO Daemon Daemon Successfully mounted dvd Mar 17 18:50:03.298499 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 17 18:50:03.344959 waagent[1675]: 2025-03-17T18:50:03.344779Z INFO Daemon Daemon Detect protocol endpoint Mar 17 18:50:03.351076 waagent[1675]: 2025-03-17T18:50:03.350958Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 18:50:03.357688 waagent[1675]: 2025-03-17T18:50:03.357584Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 17 18:50:03.365509 waagent[1675]: 2025-03-17T18:50:03.365397Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 17 18:50:03.373284 waagent[1675]: 2025-03-17T18:50:03.373180Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 17 18:50:03.379795 waagent[1675]: 2025-03-17T18:50:03.379693Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 17 18:50:03.555051 waagent[1675]: 2025-03-17T18:50:03.554968Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 17 18:50:03.564211 waagent[1675]: 2025-03-17T18:50:03.564153Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 17 18:50:03.571472 waagent[1675]: 2025-03-17T18:50:03.571361Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 17 18:50:04.217109 waagent[1675]: 2025-03-17T18:50:04.216941Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 17 18:50:04.234366 waagent[1675]: 2025-03-17T18:50:04.234254Z INFO Daemon Daemon Forcing an update of the goal state.. Mar 17 18:50:04.240585 waagent[1675]: 2025-03-17T18:50:04.240485Z INFO Daemon Daemon Fetching goal state [incarnation 1] Mar 17 18:50:04.340116 waagent[1675]: 2025-03-17T18:50:04.339969Z INFO Daemon Daemon Found private key matching thumbprint D5D03C8EC0519BD58507CCA8C3BC2E70F44A956F Mar 17 18:50:04.350742 waagent[1675]: 2025-03-17T18:50:04.350639Z INFO Daemon Daemon Certificate with thumbprint 16252F8E366E7F65C5140DAF5CA3D0E18F269B2E has no matching private key. Mar 17 18:50:04.363539 waagent[1675]: 2025-03-17T18:50:04.363429Z INFO Daemon Daemon Fetch goal state completed Mar 17 18:50:04.419411 waagent[1675]: 2025-03-17T18:50:04.419348Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: d259bcf7-d5cc-4d70-847a-c731290f54e2 New eTag: 14959341427431667907] Mar 17 18:50:04.431579 waagent[1675]: 2025-03-17T18:50:04.431484Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Mar 17 18:50:04.450523 waagent[1675]: 2025-03-17T18:50:04.450435Z INFO Daemon Daemon Starting provisioning Mar 17 18:50:04.458117 waagent[1675]: 2025-03-17T18:50:04.457983Z INFO Daemon Daemon Handle ovf-env.xml. Mar 17 18:50:04.464486 waagent[1675]: 2025-03-17T18:50:04.464387Z INFO Daemon Daemon Set hostname [ci-3510.3.7-a-c36c8d7be6] Mar 17 18:50:04.523688 waagent[1675]: 2025-03-17T18:50:04.523543Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-a-c36c8d7be6] Mar 17 18:50:04.531815 waagent[1675]: 2025-03-17T18:50:04.531713Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 17 18:50:04.539350 waagent[1675]: 2025-03-17T18:50:04.539256Z INFO Daemon Daemon Primary interface is [eth0] Mar 17 18:50:04.557269 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Mar 17 18:50:04.557518 systemd[1]: Stopped systemd-networkd-wait-online.service. Mar 17 18:50:04.557582 systemd[1]: Stopping systemd-networkd-wait-online.service... Mar 17 18:50:04.557791 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:50:04.563922 systemd-networkd[1280]: eth0: DHCPv6 lease lost Mar 17 18:50:04.565448 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:50:04.565725 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:50:04.568078 systemd[1]: Starting systemd-networkd.service... 
Mar 17 18:50:04.603086 systemd-networkd[1734]: enP12862s1: Link UP Mar 17 18:50:04.603097 systemd-networkd[1734]: enP12862s1: Gained carrier Mar 17 18:50:04.604042 systemd-networkd[1734]: eth0: Link UP Mar 17 18:50:04.604051 systemd-networkd[1734]: eth0: Gained carrier Mar 17 18:50:04.604376 systemd-networkd[1734]: lo: Link UP Mar 17 18:50:04.604384 systemd-networkd[1734]: lo: Gained carrier Mar 17 18:50:04.604623 systemd-networkd[1734]: eth0: Gained IPv6LL Mar 17 18:50:04.604849 systemd-networkd[1734]: Enumeration completed Mar 17 18:50:04.605124 systemd[1]: Started systemd-networkd.service. Mar 17 18:50:04.606979 waagent[1675]: 2025-03-17T18:50:04.606784Z INFO Daemon Daemon Create user account if not exists Mar 17 18:50:04.607225 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:50:04.616665 systemd-networkd[1734]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:50:04.618174 waagent[1675]: 2025-03-17T18:50:04.618063Z INFO Daemon Daemon User core already exists, skip useradd Mar 17 18:50:04.625859 waagent[1675]: 2025-03-17T18:50:04.625750Z INFO Daemon Daemon Configure sudoer Mar 17 18:50:04.634424 waagent[1675]: 2025-03-17T18:50:04.634323Z INFO Daemon Daemon Configure sshd Mar 17 18:50:04.640032 waagent[1675]: 2025-03-17T18:50:04.639930Z INFO Daemon Daemon Deploy ssh public key. Mar 17 18:50:04.640944 systemd-networkd[1734]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 18:50:04.653064 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:50:05.837017 waagent[1675]: 2025-03-17T18:50:05.836946Z INFO Daemon Daemon Provisioning complete Mar 17 18:50:05.866300 waagent[1675]: 2025-03-17T18:50:05.866222Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 17 18:50:05.873858 waagent[1675]: 2025-03-17T18:50:05.873749Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 17 18:50:05.886612 waagent[1675]: 2025-03-17T18:50:05.886507Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Mar 17 18:50:06.219233 waagent[1744]: 2025-03-17T18:50:06.219096Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Mar 17 18:50:06.220094 waagent[1744]: 2025-03-17T18:50:06.220028Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:50:06.220235 waagent[1744]: 2025-03-17T18:50:06.220189Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:50:06.233895 waagent[1744]: 2025-03-17T18:50:06.233721Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Mar 17 18:50:06.234104 waagent[1744]: 2025-03-17T18:50:06.234051Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Mar 17 18:50:06.324259 waagent[1744]: 2025-03-17T18:50:06.324091Z INFO ExtHandler ExtHandler Found private key matching thumbprint D5D03C8EC0519BD58507CCA8C3BC2E70F44A956F Mar 17 18:50:06.324518 waagent[1744]: 2025-03-17T18:50:06.324460Z INFO ExtHandler ExtHandler Certificate with thumbprint 16252F8E366E7F65C5140DAF5CA3D0E18F269B2E has no matching private key. 
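The DHCPv4 lease logged above (10.200.20.12/24, gateway 10.200.20.1, handed out via 168.63.129.16) implies the subnet and broadcast values that reappear in the interface dump further down. A quick consistency check with the standard library:

import ipaddress

# Values copied from the systemd-networkd lease message above.
iface = ipaddress.ip_interface("10.200.20.12/24")
gateway = ipaddress.ip_address("10.200.20.1")

print(iface.network)                     # 10.200.20.0/24
print(iface.network.broadcast_address)   # 10.200.20.255 (the 'brd' value shown later)
print(gateway in iface.network)          # True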
Mar 17 18:50:06.324762 waagent[1744]: 2025-03-17T18:50:06.324712Z INFO ExtHandler ExtHandler Fetch goal state completed Mar 17 18:50:06.342744 waagent[1744]: 2025-03-17T18:50:06.342669Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 3412a292-c114-4124-bcb9-f866f6cae777 New eTag: 14959341427431667907] Mar 17 18:50:06.343455 waagent[1744]: 2025-03-17T18:50:06.343387Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Mar 17 18:50:06.402450 waagent[1744]: 2025-03-17T18:50:06.402283Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 17 18:50:06.443600 waagent[1744]: 2025-03-17T18:50:06.443503Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1744 Mar 17 18:50:06.447852 waagent[1744]: 2025-03-17T18:50:06.447752Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] Mar 17 18:50:06.449470 waagent[1744]: 2025-03-17T18:50:06.449383Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 17 18:50:06.553017 waagent[1744]: 2025-03-17T18:50:06.552881Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 17 18:50:06.553657 waagent[1744]: 2025-03-17T18:50:06.553592Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 17 18:50:06.563505 waagent[1744]: 2025-03-17T18:50:06.563442Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 17 18:50:06.564323 waagent[1744]: 2025-03-17T18:50:06.564253Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Mar 17 18:50:06.565764 waagent[1744]: 2025-03-17T18:50:06.565689Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Mar 17 18:50:06.567444 waagent[1744]: 2025-03-17T18:50:06.567369Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 17 18:50:06.567739 waagent[1744]: 2025-03-17T18:50:06.567663Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:50:06.568370 waagent[1744]: 2025-03-17T18:50:06.568304Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:50:06.569050 waagent[1744]: 2025-03-17T18:50:06.568979Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 17 18:50:06.569582 waagent[1744]: 2025-03-17T18:50:06.569499Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
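The ERROR above is [Errno 30] EROFS, which is expected on Flatcar: /lib/systemd/system lives on the read-only /usr partition, so the agent cannot drop a unit file there and falls back to the script it already wrote under /var/lib/waagent. A small sketch to confirm the mount really is read-only:

import os

# ST_RDONLY is set in f_flag for read-only mounts; on Flatcar this path
# resolves into the read-only /usr image.
st = os.statvfs("/lib/systemd/system")
print("read-only:", bool(st.f_flag & os.ST_RDONLY))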
Mar 17 18:50:06.570320 waagent[1744]: 2025-03-17T18:50:06.570231Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 17 18:50:06.570320 waagent[1744]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 17 18:50:06.570320 waagent[1744]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 17 18:50:06.570320 waagent[1744]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 17 18:50:06.570320 waagent[1744]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:50:06.570320 waagent[1744]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:50:06.570320 waagent[1744]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:50:06.570734 waagent[1744]: 2025-03-17T18:50:06.570646Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:50:06.573798 waagent[1744]: 2025-03-17T18:50:06.573595Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 17 18:50:06.574185 waagent[1744]: 2025-03-17T18:50:06.574097Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:50:06.574694 waagent[1744]: 2025-03-17T18:50:06.574598Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 17 18:50:06.575662 waagent[1744]: 2025-03-17T18:50:06.575556Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 17 18:50:06.576191 waagent[1744]: 2025-03-17T18:50:06.576104Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 17 18:50:06.577227 waagent[1744]: 2025-03-17T18:50:06.577131Z INFO EnvHandler ExtHandler Configure routes Mar 17 18:50:06.577385 waagent[1744]: 2025-03-17T18:50:06.577313Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 17 18:50:06.579371 waagent[1744]: 2025-03-17T18:50:06.579270Z INFO EnvHandler ExtHandler Gateway:None Mar 17 18:50:06.582097 waagent[1744]: 2025-03-17T18:50:06.582007Z INFO EnvHandler ExtHandler Routes:None Mar 17 18:50:06.593457 waagent[1744]: 2025-03-17T18:50:06.593380Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Mar 17 18:50:06.594383 waagent[1744]: 2025-03-17T18:50:06.594327Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Mar 17 18:50:06.595599 waagent[1744]: 2025-03-17T18:50:06.595528Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Mar 17 18:50:06.623432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:50:06.623593 systemd[1]: Stopped kubelet.service. Mar 17 18:50:06.625230 systemd[1]: Starting kubelet.service... Mar 17 18:50:06.628011 waagent[1744]: 2025-03-17T18:50:06.627941Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Mar 17 18:50:06.637760 waagent[1744]: 2025-03-17T18:50:06.637652Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1734' Mar 17 18:50:06.728006 systemd[1]: Started kubelet.service. 
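The "Failed to get the PID of the DHCP client" error above is a parsing slip: int() was handed the raw property line 'MainPID=1734' (1734 is systemd-networkd's PID in this log) instead of the bare number. A sketch of the query and the fix; the unit name below is illustrative, since the log does not show which unit the agent actually asked about:

import subprocess

# `systemctl show -p MainPID <unit>` prints "MainPID=<pid>"; strip the key
# before converting, or use `--value` on systemd versions that support it.
out = subprocess.run(
    ["systemctl", "show", "-p", "MainPID", "systemd-networkd.service"],
    capture_output=True, text=True, check=True,
).stdout.strip()                # e.g. "MainPID=1734"

pid = int(out.split("=", 1)[1])
print(pid)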
Mar 17 18:50:06.809481 kubelet[1776]: E0317 18:50:06.809330 1776 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:50:06.812243 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:50:06.812393 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:50:07.180729 waagent[1744]: 2025-03-17T18:50:07.180571Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting Mar 17 18:50:07.891855 waagent[1675]: 2025-03-17T18:50:07.891687Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Mar 17 18:50:07.897159 waagent[1675]: 2025-03-17T18:50:07.897089Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent Mar 17 18:50:09.275206 waagent[1788]: 2025-03-17T18:50:09.275094Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2) Mar 17 18:50:09.276409 waagent[1788]: 2025-03-17T18:50:09.276340Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7 Mar 17 18:50:09.276652 waagent[1788]: 2025-03-17T18:50:09.276604Z INFO ExtHandler ExtHandler Python: 3.9.16 Mar 17 18:50:09.276892 waagent[1788]: 2025-03-17T18:50:09.276817Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Mar 17 18:50:09.286159 waagent[1788]: 2025-03-17T18:50:09.286018Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 17 18:50:09.286805 waagent[1788]: 2025-03-17T18:50:09.286744Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:50:09.287117 waagent[1788]: 2025-03-17T18:50:09.287065Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:50:09.302110 waagent[1788]: 2025-03-17T18:50:09.302005Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 17 18:50:09.317018 waagent[1788]: 2025-03-17T18:50:09.316938Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Mar 17 18:50:09.318382 waagent[1788]: 2025-03-17T18:50:09.318313Z INFO ExtHandler Mar 17 18:50:09.318662 waagent[1788]: 2025-03-17T18:50:09.318612Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ad2d15d3-e208-408f-8d38-02e6c33384b0 eTag: 14959341427431667907 source: Fabric] Mar 17 18:50:09.319603 waagent[1788]: 2025-03-17T18:50:09.319542Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 17 18:50:09.321053 waagent[1788]: 2025-03-17T18:50:09.320989Z INFO ExtHandler Mar 17 18:50:09.321297 waagent[1788]: 2025-03-17T18:50:09.321248Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 17 18:50:09.329687 waagent[1788]: 2025-03-17T18:50:09.329619Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 17 18:50:09.330494 waagent[1788]: 2025-03-17T18:50:09.330440Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Mar 17 18:50:09.355685 waagent[1788]: 2025-03-17T18:50:09.355613Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Mar 17 18:50:09.444501 waagent[1788]: 2025-03-17T18:50:09.444347Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D5D03C8EC0519BD58507CCA8C3BC2E70F44A956F', 'hasPrivateKey': True} Mar 17 18:50:09.445939 waagent[1788]: 2025-03-17T18:50:09.445836Z INFO ExtHandler Downloaded certificate {'thumbprint': '16252F8E366E7F65C5140DAF5CA3D0E18F269B2E', 'hasPrivateKey': False} Mar 17 18:50:09.447347 waagent[1788]: 2025-03-17T18:50:09.447265Z INFO ExtHandler Fetch goal state completed Mar 17 18:50:09.472131 waagent[1788]: 2025-03-17T18:50:09.471995Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Mar 17 18:50:09.486671 waagent[1788]: 2025-03-17T18:50:09.486553Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1788 Mar 17 18:50:09.490459 waagent[1788]: 2025-03-17T18:50:09.490335Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] Mar 17 18:50:09.491926 waagent[1788]: 2025-03-17T18:50:09.491832Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Mar 17 18:50:09.492422 waagent[1788]: 2025-03-17T18:50:09.492361Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Mar 17 18:50:09.494831 waagent[1788]: 2025-03-17T18:50:09.494754Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 17 18:50:09.501232 waagent[1788]: 2025-03-17T18:50:09.501164Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 17 18:50:09.501941 waagent[1788]: 2025-03-17T18:50:09.501829Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 17 18:50:09.512070 waagent[1788]: 2025-03-17T18:50:09.512004Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 17 18:50:09.512846 waagent[1788]: 2025-03-17T18:50:09.512781Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Mar 17 18:50:09.520773 waagent[1788]: 2025-03-17T18:50:09.520635Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 17 18:50:09.522296 waagent[1788]: 2025-03-17T18:50:09.522196Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Mar 17 18:50:09.524427 waagent[1788]: 2025-03-17T18:50:09.524329Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 17 18:50:09.524679 waagent[1788]: 2025-03-17T18:50:09.524599Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:50:09.525458 waagent[1788]: 2025-03-17T18:50:09.525345Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:50:09.526161 waagent[1788]: 2025-03-17T18:50:09.526080Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
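The 40-hex-character thumbprints logged above (D5D03C8E... with a matching private key, 16252F8E... without) have the shape of SHA-1 digests over the DER-encoded certificates. A sketch of that computation; the certificate path is a placeholder, not something taken from this log:

import hashlib
import ssl

# Hypothetical path holding a single PEM certificate fetched by the agent.
pem_path = "/var/lib/waagent/example.crt"
with open(pem_path) as f:
    der = ssl.PEM_cert_to_DER_cert(f.read())

print(hashlib.sha1(der).hexdigest().upper())  # 40 hex chars, like the thumbprints above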
Mar 17 18:50:09.526538 waagent[1788]: 2025-03-17T18:50:09.526468Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 17 18:50:09.526538 waagent[1788]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 17 18:50:09.526538 waagent[1788]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 17 18:50:09.526538 waagent[1788]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 17 18:50:09.526538 waagent[1788]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:50:09.526538 waagent[1788]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:50:09.526538 waagent[1788]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:50:09.529444 waagent[1788]: 2025-03-17T18:50:09.529287Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 17 18:50:09.530211 waagent[1788]: 2025-03-17T18:50:09.530119Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:50:09.530783 waagent[1788]: 2025-03-17T18:50:09.530712Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:50:09.531916 waagent[1788]: 2025-03-17T18:50:09.531283Z INFO EnvHandler ExtHandler Configure routes Mar 17 18:50:09.534144 waagent[1788]: 2025-03-17T18:50:09.533950Z INFO EnvHandler ExtHandler Gateway:None Mar 17 18:50:09.534450 waagent[1788]: 2025-03-17T18:50:09.534348Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 17 18:50:09.534948 waagent[1788]: 2025-03-17T18:50:09.534839Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 17 18:50:09.536042 waagent[1788]: 2025-03-17T18:50:09.535965Z INFO EnvHandler ExtHandler Routes:None Mar 17 18:50:09.537094 waagent[1788]: 2025-03-17T18:50:09.536986Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 17 18:50:09.537370 waagent[1788]: 2025-03-17T18:50:09.537297Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
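The routing table dumps above print /proc/net/route verbatim, where Destination, Gateway and Mask are little-endian hex. Decoding them recovers the addresses seen elsewhere in this log (gateway 10.200.20.1, the WireServer 168.63.129.16, and the 169.254.169.254 metadata endpoint):

import socket
import struct

def decode(hexaddr: str) -> str:
    # /proc/net/route stores IPv4 addresses as little-endian 32-bit hex
    return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

print(decode("0114C80A"))  # 10.200.20.1     (default gateway)
print(decode("0014C80A"))  # 10.200.20.0     (local subnet; mask 00FFFFFF = /24)
print(decode("10813FA8"))  # 168.63.129.16   (Azure WireServer)
print(decode("FEA9FEA9"))  # 169.254.169.254 (instance metadata endpoint)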
Mar 17 18:50:09.542339 waagent[1788]: 2025-03-17T18:50:09.542237Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 17 18:50:09.552383 waagent[1788]: 2025-03-17T18:50:09.552277Z INFO MonitorHandler ExtHandler Network interfaces: Mar 17 18:50:09.552383 waagent[1788]: Executing ['ip', '-a', '-o', 'link']: Mar 17 18:50:09.552383 waagent[1788]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 17 18:50:09.552383 waagent[1788]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c1:6d:f4 brd ff:ff:ff:ff:ff:ff Mar 17 18:50:09.552383 waagent[1788]: 3: enP12862s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c1:6d:f4 brd ff:ff:ff:ff:ff:ff\ altname enP12862p0s2 Mar 17 18:50:09.552383 waagent[1788]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 17 18:50:09.552383 waagent[1788]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 17 18:50:09.552383 waagent[1788]: 2: eth0 inet 10.200.20.12/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 17 18:50:09.552383 waagent[1788]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 17 18:50:09.552383 waagent[1788]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Mar 17 18:50:09.552383 waagent[1788]: 2: eth0 inet6 fe80::222:48ff:fec1:6df4/64 scope link \ valid_lft forever preferred_lft forever Mar 17 18:50:09.567670 waagent[1788]: 2025-03-17T18:50:09.567558Z INFO ExtHandler ExtHandler Downloading agent manifest Mar 17 18:50:09.585526 waagent[1788]: 2025-03-17T18:50:09.585416Z INFO ExtHandler ExtHandler Mar 17 18:50:09.586242 waagent[1788]: 2025-03-17T18:50:09.586168Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3ecf3ef0-1dc9-425c-961f-ebb4ac370086 correlation 48065f79-1003-43b5-9b72-51b0088b45a5 created: 2025-03-17T18:48:35.347740Z] Mar 17 18:50:09.587288 waagent[1788]: 2025-03-17T18:50:09.587205Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 17 18:50:09.589643 waagent[1788]: 2025-03-17T18:50:09.589556Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 4 ms] Mar 17 18:50:09.619450 waagent[1788]: 2025-03-17T18:50:09.619342Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
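In the interface dump above, eth0 and its slave enP12862s1 report the same MAC (00:22:48:c1:6d:f4), and eth0's IPv6 link-local address is simply the EUI-64 transform of that MAC. A worked check:

def mac_to_link_local(mac: str) -> str:
    # EUI-64: flip the universal/local bit of the first octet, insert ff:fe
    # in the middle, and prefix the result with fe80::/64.
    octets = [int(x, 16) for x in mac.split(":")]
    octets[0] ^= 0x02
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = [f"{(eui64[i] << 8) | eui64[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

# eth0's MAC from the 'ip -o link' output above
print(mac_to_link_local("00:22:48:c1:6d:f4"))  # fe80::222:48ff:fec1:6df4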
Mar 17 18:50:09.639574 waagent[1788]: 2025-03-17T18:50:09.639480Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 7AAB91EE-BD9A-4DB2-8DBE-12AB4D632751;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Mar 17 18:50:09.811029 waagent[1788]: 2025-03-17T18:50:09.810767Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 17 18:50:09.811029 waagent[1788]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:50:09.811029 waagent[1788]: pkts bytes target prot opt in out source destination Mar 17 18:50:09.811029 waagent[1788]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:50:09.811029 waagent[1788]: pkts bytes target prot opt in out source destination Mar 17 18:50:09.811029 waagent[1788]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:50:09.811029 waagent[1788]: pkts bytes target prot opt in out source destination Mar 17 18:50:09.811029 waagent[1788]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 18:50:09.811029 waagent[1788]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 18:50:09.811029 waagent[1788]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 18:50:09.820903 waagent[1788]: 2025-03-17T18:50:09.820720Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 17 18:50:09.820903 waagent[1788]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:50:09.820903 waagent[1788]: pkts bytes target prot opt in out source destination Mar 17 18:50:09.820903 waagent[1788]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:50:09.820903 waagent[1788]: pkts bytes target prot opt in out source destination Mar 17 18:50:09.820903 waagent[1788]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:50:09.820903 waagent[1788]: pkts bytes target prot opt in out source destination Mar 17 18:50:09.820903 waagent[1788]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 18:50:09.820903 waagent[1788]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 18:50:09.820903 waagent[1788]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 18:50:09.821563 waagent[1788]: 2025-03-17T18:50:09.821503Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 17 18:50:16.956220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:50:16.956389 systemd[1]: Stopped kubelet.service. Mar 17 18:50:16.957925 systemd[1]: Starting kubelet.service... Mar 17 18:50:17.045950 systemd[1]: Started kubelet.service. Mar 17 18:50:17.112960 kubelet[1848]: E0317 18:50:17.112905 1848 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:50:17.114892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:50:17.115049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:50:27.206292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 18:50:27.206462 systemd[1]: Stopped kubelet.service. Mar 17 18:50:27.208208 systemd[1]: Starting kubelet.service... Mar 17 18:50:27.305322 systemd[1]: Started kubelet.service. 
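The three rules reported above are the usual WireServer lock-down: allow DNS (tcp/53) to 168.63.129.16, allow traffic from root-owned processes to it, and drop any other attempt to open a new connection. A rough reconstruction with iptables; the chain and table placement are assumptions for illustration, not details read from this log:

import subprocess

WIRESERVER = "168.63.129.16"

# Order matters: the DROP rule has to come after the two ACCEPTs.
rules = [
    ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
    ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in rules:
    subprocess.run(["iptables", "-w", "-A", "OUTPUT"] + rule, check=True)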
Mar 17 18:50:27.364187 kubelet[1863]: E0317 18:50:27.364119 1863 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:50:27.366343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:50:27.366507 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:50:35.825891 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Mar 17 18:50:37.456303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 18:50:37.456483 systemd[1]: Stopped kubelet.service. Mar 17 18:50:37.458149 systemd[1]: Starting kubelet.service... Mar 17 18:50:37.546085 systemd[1]: Started kubelet.service. Mar 17 18:50:37.602601 kubelet[1879]: E0317 18:50:37.602560 1879 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:50:37.604640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:50:37.604799 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:50:40.978371 update_engine[1550]: I0317 18:50:40.977926 1550 update_attempter.cc:509] Updating boot flags... Mar 17 18:50:44.598598 systemd[1]: Created slice system-sshd.slice. Mar 17 18:50:44.600121 systemd[1]: Started sshd@0-10.200.20.12:22-10.200.16.10:41238.service. Mar 17 18:50:45.238941 sshd[1925]: Accepted publickey for core from 10.200.16.10 port 41238 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:50:45.253936 sshd[1925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:50:45.258844 systemd[1]: Started session-3.scope. Mar 17 18:50:45.260009 systemd-logind[1549]: New session 3 of user core. Mar 17 18:50:45.631762 systemd[1]: Started sshd@1-10.200.20.12:22-10.200.16.10:41244.service. Mar 17 18:50:46.105324 sshd[1930]: Accepted publickey for core from 10.200.16.10 port 41244 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:50:46.106760 sshd[1930]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:50:46.111113 systemd-logind[1549]: New session 4 of user core. Mar 17 18:50:46.111599 systemd[1]: Started session-4.scope. Mar 17 18:50:46.464101 sshd[1930]: pam_unix(sshd:session): session closed for user core Mar 17 18:50:46.467487 systemd-logind[1549]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:50:46.467793 systemd[1]: sshd@1-10.200.20.12:22-10.200.16.10:41244.service: Deactivated successfully. Mar 17 18:50:46.468610 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:50:46.469773 systemd-logind[1549]: Removed session 4. Mar 17 18:50:46.538040 systemd[1]: Started sshd@2-10.200.20.12:22-10.200.16.10:41246.service. Mar 17 18:50:46.984478 sshd[1937]: Accepted publickey for core from 10.200.16.10 port 41246 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:50:46.986250 sshd[1937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:50:46.990754 systemd[1]: Started session-5.scope. 
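kubelet.service keeps failing because /var/lib/kubelet/config.yaml does not exist yet; it normally appears only once kubeadm (or whatever bootstrapper this node expects) provisions the kubelet, which has not happened at this point in the log. The "Scheduled restart job" counters tick at a steady cadence; computing the spacing from the logged timestamps suggests a restart delay of roughly 10 seconds, an inference from the log rather than a value stated in it:

from datetime import datetime

# "Scheduled restart job" timestamps for restart counters 1-4, copied from
# the log above (date prefix dropped).
stamps = ["18:50:06.623432", "18:50:16.956220", "18:50:27.206292", "18:50:37.456303"]
times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]

deltas = [round((b - a).total_seconds(), 2) for a, b in zip(times, times[1:])]
print(deltas)  # [10.33, 10.25, 10.25] -> consistent with a ~10 s restart delay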
Mar 17 18:50:46.992061 systemd-logind[1549]: New session 5 of user core. Mar 17 18:50:47.321947 sshd[1937]: pam_unix(sshd:session): session closed for user core Mar 17 18:50:47.325132 systemd[1]: sshd@2-10.200.20.12:22-10.200.16.10:41246.service: Deactivated successfully. Mar 17 18:50:47.325969 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:50:47.327028 systemd-logind[1549]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:50:47.328040 systemd-logind[1549]: Removed session 5. Mar 17 18:50:47.400936 systemd[1]: Started sshd@3-10.200.20.12:22-10.200.16.10:41258.service. Mar 17 18:50:47.706278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 17 18:50:47.706456 systemd[1]: Stopped kubelet.service. Mar 17 18:50:47.708154 systemd[1]: Starting kubelet.service... Mar 17 18:50:47.874667 systemd[1]: Started kubelet.service. Mar 17 18:50:47.888900 sshd[1944]: Accepted publickey for core from 10.200.16.10 port 41258 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:50:47.891575 sshd[1944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:50:47.896642 systemd[1]: Started session-6.scope. Mar 17 18:50:47.897946 systemd-logind[1549]: New session 6 of user core. Mar 17 18:50:47.939259 kubelet[1957]: E0317 18:50:47.939199 1957 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:50:47.941227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:50:47.941384 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:50:48.237481 sshd[1944]: pam_unix(sshd:session): session closed for user core Mar 17 18:50:48.240447 systemd-logind[1549]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:50:48.241327 systemd[1]: sshd@3-10.200.20.12:22-10.200.16.10:41258.service: Deactivated successfully. Mar 17 18:50:48.242203 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:50:48.242943 systemd-logind[1549]: Removed session 6. Mar 17 18:50:48.319251 systemd[1]: Started sshd@4-10.200.20.12:22-10.200.16.10:41264.service. Mar 17 18:50:48.807684 sshd[1969]: Accepted publickey for core from 10.200.16.10 port 41264 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:50:48.809487 sshd[1969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:50:48.814106 systemd[1]: Started session-7.scope. Mar 17 18:50:48.814480 systemd-logind[1549]: New session 7 of user core. Mar 17 18:50:49.310292 sudo[1973]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 18:50:49.310527 sudo[1973]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:50:49.353111 dbus-daemon[1531]: avc: received setenforce notice (enforcing=1) Mar 17 18:50:49.355130 sudo[1973]: pam_unix(sudo:session): session closed for user root Mar 17 18:50:49.445180 sshd[1969]: pam_unix(sshd:session): session closed for user core Mar 17 18:50:49.449147 systemd-logind[1549]: Session 7 logged out. Waiting for processes to exit. Mar 17 18:50:49.449359 systemd[1]: sshd@4-10.200.20.12:22-10.200.16.10:41264.service: Deactivated successfully. Mar 17 18:50:49.450217 systemd[1]: session-7.scope: Deactivated successfully. 
Mar 17 18:50:49.451014 systemd-logind[1549]: Removed session 7. Mar 17 18:50:49.520455 systemd[1]: Started sshd@5-10.200.20.12:22-10.200.16.10:37952.service. Mar 17 18:50:49.995129 sshd[1977]: Accepted publickey for core from 10.200.16.10 port 37952 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:50:49.996765 sshd[1977]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:50:50.001779 systemd[1]: Started session-8.scope. Mar 17 18:50:50.002215 systemd-logind[1549]: New session 8 of user core. Mar 17 18:50:50.262976 sudo[1982]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 18:50:50.264100 sudo[1982]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:50:50.267519 sudo[1982]: pam_unix(sudo:session): session closed for user root Mar 17 18:50:50.273246 sudo[1981]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 17 18:50:50.273492 sudo[1981]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:50:50.284152 systemd[1]: Stopping audit-rules.service... Mar 17 18:50:50.284000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Mar 17 18:50:50.290492 kernel: kauditd_printk_skb: 84 callbacks suppressed Mar 17 18:50:50.290583 kernel: audit: type=1305 audit(1742237450.284:163): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Mar 17 18:50:50.290916 auditctl[1985]: No rules Mar 17 18:50:50.291460 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 18:50:50.291731 systemd[1]: Stopped audit-rules.service. Mar 17 18:50:50.293733 systemd[1]: Starting audit-rules.service... Mar 17 18:50:50.284000 audit[1985]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd950af60 a2=420 a3=0 items=0 ppid=1 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:50.325077 augenrules[2003]: No rules Mar 17 18:50:50.330129 kernel: audit: type=1300 audit(1742237450.284:163): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd950af60 a2=420 a3=0 items=0 ppid=1 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:50.326421 systemd[1]: Finished audit-rules.service. Mar 17 18:50:50.327745 sudo[1981]: pam_unix(sudo:session): session closed for user root Mar 17 18:50:50.284000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Mar 17 18:50:50.339312 kernel: audit: type=1327 audit(1742237450.284:163): proctitle=2F7362696E2F617564697463746C002D44 Mar 17 18:50:50.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:50:50.362093 kernel: audit: type=1131 audit(1742237450.289:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:50:50.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:50:50.378734 kernel: audit: type=1130 audit(1742237450.322:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:50:50.322000 audit[1981]: USER_END pid=1981 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:50:50.402667 kernel: audit: type=1106 audit(1742237450.322:166): pid=1981 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:50:50.402715 kernel: audit: type=1104 audit(1742237450.327:167): pid=1981 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:50:50.327000 audit[1981]: CRED_DISP pid=1981 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:50:50.421092 sshd[1977]: pam_unix(sshd:session): session closed for user core Mar 17 18:50:50.421000 audit[1977]: USER_END pid=1977 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:50:50.425157 systemd[1]: sshd@5-10.200.20.12:22-10.200.16.10:37952.service: Deactivated successfully. Mar 17 18:50:50.425996 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 18:50:50.426790 systemd-logind[1549]: Session 8 logged out. Waiting for processes to exit. Mar 17 18:50:50.427909 systemd-logind[1549]: Removed session 8. Mar 17 18:50:50.451115 kernel: audit: type=1106 audit(1742237450.421:168): pid=1977 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:50:50.421000 audit[1977]: CRED_DISP pid=1977 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:50:50.470820 kernel: audit: type=1104 audit(1742237450.421:169): pid=1977 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:50:50.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.12:22-10.200.16.10:37952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 18:50:50.492724 kernel: audit: type=1131 audit(1742237450.421:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.12:22-10.200.16.10:37952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:50:50.523604 systemd[1]: Started sshd@6-10.200.20.12:22-10.200.16.10:37968.service. Mar 17 18:50:50.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.12:22-10.200.16.10:37968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:50:50.969000 audit[2010]: USER_ACCT pid=2010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:50:50.970305 sshd[2010]: Accepted publickey for core from 10.200.16.10 port 37968 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:50:50.970000 audit[2010]: CRED_ACQ pid=2010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:50:50.970000 audit[2010]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd652b570 a2=3 a3=1 items=0 ppid=1 pid=2010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:50.970000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:50:50.972044 sshd[2010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:50:50.976524 systemd[1]: Started session-9.scope. Mar 17 18:50:50.977736 systemd-logind[1549]: New session 9 of user core. Mar 17 18:50:50.982000 audit[2010]: USER_START pid=2010 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:50:50.983000 audit[2013]: CRED_ACQ pid=2013 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:50:51.223000 audit[2014]: USER_ACCT pid=2014 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:50:51.223000 audit[2014]: CRED_REFR pid=2014 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Mar 17 18:50:51.224461 sudo[2014]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:50:51.224695 sudo[2014]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:50:51.225000 audit[2014]: USER_START pid=2014 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:50:51.250316 systemd[1]: Starting docker.service... Mar 17 18:50:51.293681 env[2024]: time="2025-03-17T18:50:51.293629824Z" level=info msg="Starting up" Mar 17 18:50:51.299179 env[2024]: time="2025-03-17T18:50:51.299143902Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:50:51.299317 env[2024]: time="2025-03-17T18:50:51.299303862Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:50:51.299383 env[2024]: time="2025-03-17T18:50:51.299368342Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:50:51.299451 env[2024]: time="2025-03-17T18:50:51.299438422Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:50:51.301596 env[2024]: time="2025-03-17T18:50:51.301553101Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:50:51.301596 env[2024]: time="2025-03-17T18:50:51.301587301Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:50:51.301786 env[2024]: time="2025-03-17T18:50:51.301612301Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:50:51.301786 env[2024]: time="2025-03-17T18:50:51.301623501Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:50:51.397197 env[2024]: time="2025-03-17T18:50:51.397154351Z" level=warning msg="Your kernel does not support cgroup blkio weight" Mar 17 18:50:51.397197 env[2024]: time="2025-03-17T18:50:51.397187311Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Mar 17 18:50:51.397433 env[2024]: time="2025-03-17T18:50:51.397357110Z" level=info msg="Loading containers: start." 
Mar 17 18:50:51.465000 audit[2052]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.465000 audit[2052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff3f4e0f0 a2=0 a3=1 items=0 ppid=2024 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.465000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Mar 17 18:50:51.467000 audit[2054]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.467000 audit[2054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd81c4f10 a2=0 a3=1 items=0 ppid=2024 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.467000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Mar 17 18:50:51.469000 audit[2056]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.469000 audit[2056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe9b8a7a0 a2=0 a3=1 items=0 ppid=2024 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.469000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Mar 17 18:50:51.471000 audit[2058]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.471000 audit[2058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd537d7e0 a2=0 a3=1 items=0 ppid=2024 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.471000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Mar 17 18:50:51.473000 audit[2060]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.473000 audit[2060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd1728df0 a2=0 a3=1 items=0 ppid=2024 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.473000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Mar 17 18:50:51.475000 audit[2062]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.475000 audit[2062]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd893a540 a2=0 a3=1 items=0 ppid=2024 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.475000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Mar 17 18:50:51.498000 audit[2064]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.498000 audit[2064]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe83ebff0 a2=0 a3=1 items=0 ppid=2024 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.498000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Mar 17 18:50:51.500000 audit[2066]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.500000 audit[2066]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd17f3cc0 a2=0 a3=1 items=0 ppid=2024 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.500000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Mar 17 18:50:51.502000 audit[2068]: NETFILTER_CFG table=filter:13 family=2 entries=2 op=nft_register_chain pid=2068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.502000 audit[2068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffe5938c00 a2=0 a3=1 items=0 ppid=2024 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.502000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:50:51.523000 audit[2072]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_unregister_rule pid=2072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.523000 audit[2072]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe277c2c0 a2=0 a3=1 items=0 ppid=2024 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.523000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:50:51.530000 audit[2073]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.530000 audit[2073]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffb76e7e0 a2=0 a3=1 items=0 ppid=2024 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.530000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:50:51.592897 kernel: Initializing XFRM netlink socket Mar 17 18:50:51.635080 env[2024]: time="2025-03-17T18:50:51.635021115Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 18:50:51.717000 audit[2081]: NETFILTER_CFG table=nat:16 family=2 entries=2 op=nft_register_chain pid=2081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.717000 audit[2081]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffeffd9700 a2=0 a3=1 items=0 ppid=2024 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.717000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Mar 17 18:50:51.732000 audit[2084]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_rule pid=2084 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.732000 audit[2084]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffebbcc6d0 a2=0 a3=1 items=0 ppid=2024 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.732000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Mar 17 18:50:51.736000 audit[2087]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_rule pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.736000 audit[2087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd1677e60 a2=0 a3=1 items=0 ppid=2024 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.736000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Mar 17 18:50:51.739000 audit[2089]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.739000 audit[2089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffcd713340 a2=0 a3=1 items=0 ppid=2024 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.739000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Mar 17 18:50:51.742000 audit[2091]: NETFILTER_CFG table=nat:20 family=2 entries=2 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.742000 audit[2091]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffff901680 a2=0 a3=1 items=0 ppid=2024 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.742000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Mar 17 18:50:51.744000 audit[2093]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.744000 audit[2093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffc2c2c6c0 a2=0 a3=1 items=0 ppid=2024 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.744000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Mar 17 18:50:51.747000 audit[2095]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.747000 audit[2095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=fffff6d0a030 a2=0 a3=1 items=0 ppid=2024 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.747000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Mar 17 18:50:51.749000 audit[2097]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.749000 audit[2097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffcc937df0 a2=0 a3=1 items=0 ppid=2024 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.749000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Mar 17 18:50:51.751000 audit[2099]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.751000 audit[2099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffca05ce40 a2=0 a3=1 items=0 ppid=2024 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.751000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Mar 17 18:50:51.753000 audit[2101]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2101 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.753000 audit[2101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffff55d4a20 a2=0 a3=1 items=0 ppid=2024 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.753000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Mar 17 18:50:51.756000 audit[2103]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.756000 audit[2103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffde93b500 a2=0 a3=1 items=0 ppid=2024 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.756000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Mar 17 18:50:51.757686 systemd-networkd[1734]: docker0: Link UP Mar 17 18:50:51.786000 audit[2107]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_unregister_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.786000 audit[2107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffeac975d0 a2=0 a3=1 items=0 ppid=2024 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.786000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:50:51.794000 audit[2108]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=2108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:50:51.794000 audit[2108]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffeb434bd0 a2=0 a3=1 items=0 ppid=2024 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:50:51.794000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:50:51.795658 env[2024]: time="2025-03-17T18:50:51.795611463Z" level=info msg="Loading containers: done." Mar 17 18:50:51.809190 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck855014448-merged.mount: Deactivated successfully. 
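The audit records above are dockerd installing the DOCKER, DOCKER-USER and DOCKER-ISOLATION chains for the docker0 bridge through /usr/sbin/xtables-nft-multi; the exact command line of each call is carried in the PROCTITLE field as hex-encoded argv strings separated by NUL bytes. A minimal Python sketch for decoding such a value (the helper name is illustrative; the sample is the first PROCTITLE of this block, nothing new):

    def decode_proctitle(hex_value: str) -> str:
        """Decode an auditd PROCTITLE field: hex-encoded argv separated by NUL bytes."""
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

    sample = ("2F7573722F7362696E2F69707461626C6573002D2D77616974"
              "002D4900464F5257415244002D6A00444F434B45522D55534552")
    print(decode_proctitle(sample))  # -> /usr/sbin/iptables --wait -I FORWARD -j DOCKER-USER

Decoded the same way, the surrounding records spell out the usual Docker ruleset: MASQUERADE for 172.17.0.0/16, RETURN in the DOCKER chain, ACCEPT for established traffic on docker0, and the two ISOLATION stages.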
Mar 17 18:50:51.863549 env[2024]: time="2025-03-17T18:50:51.863496962Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:50:51.864037 env[2024]: time="2025-03-17T18:50:51.864014561Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:50:51.864285 env[2024]: time="2025-03-17T18:50:51.864260601Z" level=info msg="Daemon has completed initialization" Mar 17 18:50:51.911172 systemd[1]: Started docker.service. Mar 17 18:50:51.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:50:51.914062 env[2024]: time="2025-03-17T18:50:51.913981666Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:50:55.995606 env[1566]: time="2025-03-17T18:50:55.995279012Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 18:50:57.956243 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 17 18:50:57.956432 systemd[1]: Stopped kubelet.service. Mar 17 18:50:57.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:50:57.962962 kernel: kauditd_printk_skb: 84 callbacks suppressed Mar 17 18:50:57.963091 kernel: audit: type=1130 audit(1742237457.955:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:50:57.961202 systemd[1]: Starting kubelet.service... Mar 17 18:50:57.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:50:58.001416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446521616.mount: Deactivated successfully. Mar 17 18:50:58.002662 kernel: audit: type=1131 audit(1742237457.955:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:50:58.346342 systemd[1]: Started kubelet.service. Mar 17 18:50:58.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:50:58.372025 kernel: audit: type=1130 audit(1742237458.345:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:50:58.408567 kubelet[2155]: E0317 18:50:58.408506 2155 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:50:58.410550 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:50:58.410700 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:50:58.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:50:58.436041 kernel: audit: type=1131 audit(1742237458.410:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:51:00.227978 env[1566]: time="2025-03-17T18:51:00.227919363Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:00.238102 env[1566]: time="2025-03-17T18:51:00.238050084Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:00.244206 env[1566]: time="2025-03-17T18:51:00.244147364Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:00.249652 env[1566]: time="2025-03-17T18:51:00.249602986Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:00.250639 env[1566]: time="2025-03-17T18:51:00.250606087Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 17 18:51:00.260405 env[1566]: time="2025-03-17T18:51:00.260366166Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 18:51:02.447910 env[1566]: time="2025-03-17T18:51:02.447844133Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:02.455956 env[1566]: time="2025-03-17T18:51:02.455905261Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:02.462591 env[1566]: time="2025-03-17T18:51:02.462539682Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:02.468083 env[1566]: time="2025-03-17T18:51:02.468017692Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:02.468980 env[1566]: time="2025-03-17T18:51:02.468928047Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 17 18:51:02.478792 env[1566]: time="2025-03-17T18:51:02.478747611Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 18:51:04.104703 env[1566]: time="2025-03-17T18:51:04.104644629Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:04.112764 env[1566]: time="2025-03-17T18:51:04.112699598Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:04.119429 env[1566]: time="2025-03-17T18:51:04.119388527Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:04.124971 env[1566]: time="2025-03-17T18:51:04.124928757Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:04.125884 env[1566]: time="2025-03-17T18:51:04.125827918Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 17 18:51:04.135547 env[1566]: time="2025-03-17T18:51:04.135490864Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 18:51:05.301341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount349742848.mount: Deactivated successfully. 
Mar 17 18:51:05.963277 env[1566]: time="2025-03-17T18:51:05.963209577Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:05.973251 env[1566]: time="2025-03-17T18:51:05.973181480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:05.980584 env[1566]: time="2025-03-17T18:51:05.980521609Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:05.987860 env[1566]: time="2025-03-17T18:51:05.987817902Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:05.988341 env[1566]: time="2025-03-17T18:51:05.988305380Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 17 18:51:05.998765 env[1566]: time="2025-03-17T18:51:05.998721484Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:51:06.761730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1363425871.mount: Deactivated successfully. Mar 17 18:51:08.456303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 17 18:51:08.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:08.456480 systemd[1]: Stopped kubelet.service. Mar 17 18:51:08.458005 systemd[1]: Starting kubelet.service... Mar 17 18:51:08.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:08.493184 kernel: audit: type=1130 audit(1742237468.455:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:08.493314 kernel: audit: type=1131 audit(1742237468.455:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:09.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:09.556743 systemd[1]: Started kubelet.service. Mar 17 18:51:09.576924 kernel: audit: type=1130 audit(1742237469.556:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:51:09.604373 kubelet[2192]: E0317 18:51:09.604338 2192 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:51:09.606126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:51:09.606265 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:51:09.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:51:09.625902 kernel: audit: type=1131 audit(1742237469.605:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:51:10.040895 env[1566]: time="2025-03-17T18:51:10.040325152Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:10.051989 env[1566]: time="2025-03-17T18:51:10.051934119Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:10.060573 env[1566]: time="2025-03-17T18:51:10.059627220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:10.068645 env[1566]: time="2025-03-17T18:51:10.068608504Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:10.069672 env[1566]: time="2025-03-17T18:51:10.069645506Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 18:51:10.079381 env[1566]: time="2025-03-17T18:51:10.079355215Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 18:51:10.799210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2093625568.mount: Deactivated successfully. 
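The two restart cycles above (counter 6, then 7 a few lines later) are one crash loop: the kubelet unit starts, run.go exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet (that file is normally written later in cluster bootstrap, for example by kubeadm), and systemd schedules the next attempt. A minimal sketch over plain journal text like the excerpts here, with illustrative names, that tallies the loop and extracts the leading part of the latest failure message:

    import re

    RESTART_RE = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")
    FAILURE_RE = re.compile(r'run\.go:\d+\] "command failed" err="([^",]+)')

    def summarize_crash_loop(journal_text: str) -> dict:
        """Count kubelet restart attempts and report the start of the latest failure."""
        counters = [int(n) for n in RESTART_RE.findall(journal_text)]
        failures = FAILURE_RE.findall(journal_text)
        return {
            "latest_restart_counter": max(counters) if counters else 0,
            "last_failure": failures[-1] if failures else None,
        }

On the log so far the result would be a counter of 7 and a last_failure beginning with "failed to load kubelet config file".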
Mar 17 18:51:10.835666 env[1566]: time="2025-03-17T18:51:10.835598428Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:10.847736 env[1566]: time="2025-03-17T18:51:10.847694678Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:10.855105 env[1566]: time="2025-03-17T18:51:10.855056044Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:10.860840 env[1566]: time="2025-03-17T18:51:10.860801171Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:10.861361 env[1566]: time="2025-03-17T18:51:10.861321732Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 17 18:51:10.870389 env[1566]: time="2025-03-17T18:51:10.870352813Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 18:51:11.672665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2743029654.mount: Deactivated successfully. Mar 17 18:51:14.523833 env[1566]: time="2025-03-17T18:51:14.523775695Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:14.536224 env[1566]: time="2025-03-17T18:51:14.536165935Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:14.540440 env[1566]: time="2025-03-17T18:51:14.540393208Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:14.548965 env[1566]: time="2025-03-17T18:51:14.548924749Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:14.550645 env[1566]: time="2025-03-17T18:51:14.550608555Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 17 18:51:19.706298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 17 18:51:19.706560 systemd[1]: Stopped kubelet.service. Mar 17 18:51:19.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:19.708289 systemd[1]: Starting kubelet.service... Mar 17 18:51:19.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:51:19.760981 kernel: audit: type=1130 audit(1742237479.705:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:19.761096 kernel: audit: type=1131 audit(1742237479.705:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:19.991243 systemd[1]: Started kubelet.service. Mar 17 18:51:19.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:20.025906 kernel: audit: type=1130 audit(1742237479.990:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:20.070611 kubelet[2276]: E0317 18:51:20.070556 2276 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:51:20.072328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:51:20.072478 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:51:20.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:51:20.094893 kernel: audit: type=1131 audit(1742237480.071:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:51:21.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:21.549327 systemd[1]: Stopped kubelet.service. Mar 17 18:51:21.551745 systemd[1]: Starting kubelet.service... Mar 17 18:51:21.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:21.598359 kernel: audit: type=1130 audit(1742237481.548:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:21.598482 kernel: audit: type=1131 audit(1742237481.548:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:21.612185 systemd[1]: Reloading. 
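Every one of those starts and stops is mirrored by an audit record emitted for PID 1 (SERVICE_START / SERVICE_STOP with res=success or res=failed), which is exactly what the kernel's type=1130 / type=1131 kauditd lines are echoing. A small sketch, under the same journal-text assumption and with illustrative names, that turns those records into a per-unit timeline:

    import re

    AUDIT_SVC_RE = re.compile(
        r"audit\[1\]: (SERVICE_START|SERVICE_STOP) .*?unit=(\S+) .*?res=(success|failed)"
    )

    def service_events(journal_text: str):
        """Yield (event, unit, result) tuples from systemd's service audit records."""
        for event, unit, result in AUDIT_SVC_RE.findall(journal_text):
            yield event, unit, result

Filtering the tuples for unit == "kubelet" reproduces the crash loop seen above: successful starts and stops during each restart, and a res=failed SERVICE_STOP each time run.go bails out.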
Mar 17 18:51:21.706157 /usr/lib/systemd/system-generators/torcx-generator[2318]: time="2025-03-17T18:51:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:51:21.706188 /usr/lib/systemd/system-generators/torcx-generator[2318]: time="2025-03-17T18:51:21Z" level=info msg="torcx already run" Mar 17 18:51:21.795229 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:51:21.795256 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:51:21.811545 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:51:21.910578 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 18:51:21.910838 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 18:51:21.911264 systemd[1]: Stopped kubelet.service. Mar 17 18:51:21.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:51:21.923758 systemd[1]: Starting kubelet.service... Mar 17 18:51:21.935935 kernel: audit: type=1130 audit(1742237481.910:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:51:22.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:22.022037 systemd[1]: Started kubelet.service. Mar 17 18:51:22.048892 kernel: audit: type=1130 audit(1742237482.021:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:22.086815 kubelet[2389]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:51:22.086815 kubelet[2389]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:51:22.086815 kubelet[2389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
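Those three warnings mean this node still passes --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir on the kubelet command line, although upstream wants the first and last moved into the file given to --config and the middle one dropped in favour of CRI-provided sandbox-image information. A tiny sketch, same assumptions as before, that lists which deprecated flags a kubelet was actually started with:

    import re

    DEPRECATED_RE = re.compile(r"Flag (--[a-z0-9-]+) has been deprecated")

    def deprecated_flags(journal_text: str) -> set:
        """Collect the deprecated kubelet flags reported at startup."""
        return set(DEPRECATED_RE.findall(journal_text))

For this boot the resulting set is exactly the three flags warned about above.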
Mar 17 18:51:22.087385 kubelet[2389]: I0317 18:51:22.086797 2389 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:51:23.619689 kubelet[2389]: I0317 18:51:23.619634 2389 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:51:23.619689 kubelet[2389]: I0317 18:51:23.619673 2389 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:51:23.620176 kubelet[2389]: I0317 18:51:23.619952 2389 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:51:23.632798 kubelet[2389]: E0317 18:51:23.632766 2389 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:23.633221 kubelet[2389]: I0317 18:51:23.633200 2389 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:51:23.643139 kubelet[2389]: I0317 18:51:23.643105 2389 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 18:51:23.644789 kubelet[2389]: I0317 18:51:23.644729 2389 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:51:23.645188 kubelet[2389]: I0317 18:51:23.644992 2389 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-a-c36c8d7be6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:51:23.645339 kubelet[2389]: I0317 18:51:23.645324 2389 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:51:23.645398 kubelet[2389]: I0317 18:51:23.645390 2389 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:51:23.645591 kubelet[2389]: I0317 18:51:23.645578 2389 state_mem.go:36] "Initialized new in-memory 
state store" Mar 17 18:51:23.646561 kubelet[2389]: I0317 18:51:23.646542 2389 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:51:23.646660 kubelet[2389]: I0317 18:51:23.646649 2389 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:51:23.646753 kubelet[2389]: I0317 18:51:23.646743 2389 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:51:23.646827 kubelet[2389]: I0317 18:51:23.646818 2389 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:51:23.649101 kubelet[2389]: I0317 18:51:23.649071 2389 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:51:23.649280 kubelet[2389]: I0317 18:51:23.649259 2389 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:51:23.649331 kubelet[2389]: W0317 18:51:23.649312 2389 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:51:23.649926 kubelet[2389]: I0317 18:51:23.649898 2389 server.go:1264] "Started kubelet" Mar 17 18:51:23.650122 kubelet[2389]: W0317 18:51:23.650066 2389 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-c36c8d7be6&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:23.650180 kubelet[2389]: E0317 18:51:23.650138 2389 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-c36c8d7be6&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:23.659500 kubelet[2389]: W0317 18:51:23.659451 2389 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:23.659694 kubelet[2389]: E0317 18:51:23.659682 2389 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:23.659956 kubelet[2389]: E0317 18:51:23.659808 2389 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-a-c36c8d7be6.182dabc2b751d68f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-a-c36c8d7be6,UID:ci-3510.3.7-a-c36c8d7be6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-a-c36c8d7be6,},FirstTimestamp:2025-03-17 18:51:23.649848975 +0000 UTC m=+1.616156592,LastTimestamp:2025-03-17 18:51:23.649848975 +0000 UTC m=+1.616156592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-a-c36c8d7be6,}" Mar 17 18:51:23.659000 audit[2389]: AVC avc: denied { mac_admin } for pid=2389 comm="kubelet" capability=33 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:51:23.662910 kubelet[2389]: E0317 18:51:23.662881 2389 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:51:23.663220 kubelet[2389]: I0317 18:51:23.663173 2389 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:51:23.663577 kubelet[2389]: I0317 18:51:23.663558 2389 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:51:23.663703 kubelet[2389]: I0317 18:51:23.663680 2389 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:51:23.664657 kubelet[2389]: I0317 18:51:23.664635 2389 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:51:23.659000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:51:23.692367 kernel: audit: type=1400 audit(1742237483.659:221): avc: denied { mac_admin } for pid=2389 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:51:23.692528 kernel: audit: type=1401 audit(1742237483.659:221): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:51:23.692561 kubelet[2389]: I0317 18:51:23.692410 2389 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Mar 17 18:51:23.692561 kubelet[2389]: I0317 18:51:23.692501 2389 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Mar 17 18:51:23.659000 audit[2389]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400056df20 a1=4000b78468 a2=400056def0 a3=25 items=0 ppid=1 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:23.659000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:51:23.691000 audit[2389]: AVC avc: denied { mac_admin } for pid=2389 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:51:23.691000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:51:23.691000 audit[2389]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40006e2000 a1=4000bba000 a2=4000c38090 a3=25 items=0 ppid=1 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:23.691000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:51:23.693004 kubelet[2389]: I0317 18:51:23.692984 2389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:51:23.694029 kubelet[2389]: I0317 18:51:23.693992 2389 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:51:23.694658 kubelet[2389]: I0317 18:51:23.694620 2389 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:51:23.695846 kubelet[2389]: I0317 18:51:23.695819 2389 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:51:23.696072 kubelet[2389]: I0317 18:51:23.696051 2389 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:51:23.696492 kubelet[2389]: E0317 18:51:23.696454 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-c36c8d7be6?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="200ms" Mar 17 18:51:23.696766 kubelet[2389]: I0317 18:51:23.695925 2389 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:51:23.697792 kubelet[2389]: W0317 18:51:23.697727 2389 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:23.697939 kubelet[2389]: E0317 18:51:23.697812 2389 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:23.698587 kubelet[2389]: I0317 18:51:23.698564 2389 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:51:23.700000 audit[2402]: NETFILTER_CFG table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2402 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:23.700000 audit[2402]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc2c75090 a2=0 a3=1 items=0 ppid=2389 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:23.700000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Mar 17 18:51:23.708000 audit[2404]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:23.708000 audit[2404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7b70540 a2=0 a3=1 items=0 ppid=2389 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:23.708000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Mar 17 18:51:23.711000 audit[2406]: NETFILTER_CFG table=filter:31 family=2 entries=2 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:23.711000 audit[2406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc7eaabb0 a2=0 a3=1 items=0 ppid=2389 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:23.711000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Mar 17 18:51:23.713000 audit[2408]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:23.713000 audit[2408]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd890a340 a2=0 a3=1 items=0 ppid=2389 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:23.713000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Mar 17 18:51:23.805000 audit[2412]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:23.805000 audit[2412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffdb809e00 a2=0 a3=1 items=0 ppid=2389 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:23.805000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Mar 17 18:51:23.807043 kubelet[2389]: I0317 18:51:23.806989 2389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:51:23.807000 audit[2413]: NETFILTER_CFG table=mangle:34 family=10 entries=2 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:23.807000 audit[2413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff20e7750 a2=0 a3=1 items=0 ppid=2389 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:23.807000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Mar 17 18:51:23.808619 kubelet[2389]: I0317 18:51:23.808592 2389 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:51:23.808727 kubelet[2389]: I0317 18:51:23.808718 2389 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:51:23.809009 kubelet[2389]: I0317 18:51:23.808975 2389 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:51:23.809105 kubelet[2389]: E0317 18:51:23.809059 2389 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:51:23.808000 audit[2414]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:23.808000 audit[2414]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffedcf3200 a2=0 a3=1 items=0 ppid=2389 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:23.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Mar 17 18:51:23.809819 kubelet[2389]: W0317 18:51:23.809792 2389 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:23.809991 kubelet[2389]: E0317 18:51:23.809975 2389 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:23.810000 audit[2416]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:23.810000 audit[2416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc8d866c0 a2=0 a3=1 items=0 ppid=2389 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:23.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Mar 17 18:51:23.810000 audit[2417]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:23.810000 audit[2417]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe86c0370 a2=0 a3=1 items=0 ppid=2389 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:23.810000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Mar 17 18:51:23.812000 audit[2418]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_chain pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:23.812000 audit[2418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc1a2cda0 a2=0 a3=1 items=0 ppid=2389 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:23.812000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Mar 17 18:51:23.812000 audit[2419]: NETFILTER_CFG table=nat:39 family=10 entries=2 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:23.812000 audit[2419]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffd1a5cda0 a2=0 a3=1 items=0 ppid=2389 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:23.812000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Mar 17 18:51:23.814000 audit[2420]: NETFILTER_CFG table=filter:40 family=10 entries=2 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:23.814000 audit[2420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffee554690 a2=0 a3=1 items=0 ppid=2389 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:23.814000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Mar 17 18:51:23.824811 kubelet[2389]: I0317 18:51:23.824775 2389 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:23.825610 kubelet[2389]: I0317 18:51:23.825583 2389 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:51:23.825610 kubelet[2389]: I0317 18:51:23.825608 2389 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:51:23.825740 kubelet[2389]: I0317 18:51:23.825633 2389 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:51:23.826069 kubelet[2389]: E0317 18:51:23.826039 2389 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:23.833049 kubelet[2389]: I0317 18:51:23.833012 2389 policy_none.go:49] "None policy: Start" Mar 17 18:51:23.833913 kubelet[2389]: I0317 18:51:23.833847 2389 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:51:23.833913 kubelet[2389]: I0317 18:51:23.833897 2389 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:51:23.842330 kubelet[2389]: I0317 18:51:23.842295 2389 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:51:23.841000 audit[2389]: AVC avc: denied { mac_admin } for pid=2389 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:51:23.841000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:51:23.841000 audit[2389]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000f333e0 a1=4000f385d0 a2=4000f333b0 a3=25 items=0 ppid=1 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:23.841000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:51:23.842578 kubelet[2389]: I0317 18:51:23.842395 2389 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Mar 17 18:51:23.842578 kubelet[2389]: I0317 18:51:23.842523 2389 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:51:23.842651 kubelet[2389]: I0317 18:51:23.842628 2389 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:51:23.847038 kubelet[2389]: E0317 18:51:23.846999 2389 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-a-c36c8d7be6\" not found" Mar 17 18:51:23.898080 kubelet[2389]: E0317 18:51:23.897964 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-c36c8d7be6?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="400ms" Mar 17 18:51:23.910128 kubelet[2389]: I0317 18:51:23.910072 2389 topology_manager.go:215] "Topology Admit Handler" podUID="da937f9e89cf088a54c30093e8a24ca4" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:23.912574 kubelet[2389]: I0317 18:51:23.912545 2389 topology_manager.go:215] "Topology Admit Handler" podUID="aebb7b8c12bd9c158f1fdb95ca1392dc" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:23.914077 kubelet[2389]: I0317 18:51:23.914043 2389 topology_manager.go:215] "Topology Admit Handler" podUID="28c4b68fbc4f0109758cdd6332ed9dea" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:23.998659 kubelet[2389]: I0317 18:51:23.998613 2389 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28c4b68fbc4f0109758cdd6332ed9dea-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-a-c36c8d7be6\" (UID: \"28c4b68fbc4f0109758cdd6332ed9dea\") " pod="kube-system/kube-scheduler-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:23.998659 kubelet[2389]: I0317 18:51:23.998653 2389 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da937f9e89cf088a54c30093e8a24ca4-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-a-c36c8d7be6\" (UID: \"da937f9e89cf088a54c30093e8a24ca4\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:23.998849 kubelet[2389]: I0317 18:51:23.998676 2389 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da937f9e89cf088a54c30093e8a24ca4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-a-c36c8d7be6\" (UID: \"da937f9e89cf088a54c30093e8a24ca4\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:23.998849 kubelet[2389]: I0317 
18:51:23.998701 2389 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aebb7b8c12bd9c158f1fdb95ca1392dc-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-c36c8d7be6\" (UID: \"aebb7b8c12bd9c158f1fdb95ca1392dc\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:23.998849 kubelet[2389]: I0317 18:51:23.998725 2389 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aebb7b8c12bd9c158f1fdb95ca1392dc-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-a-c36c8d7be6\" (UID: \"aebb7b8c12bd9c158f1fdb95ca1392dc\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:23.998849 kubelet[2389]: I0317 18:51:23.998742 2389 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aebb7b8c12bd9c158f1fdb95ca1392dc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-a-c36c8d7be6\" (UID: \"aebb7b8c12bd9c158f1fdb95ca1392dc\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:23.998849 kubelet[2389]: I0317 18:51:23.998758 2389 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da937f9e89cf088a54c30093e8a24ca4-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-a-c36c8d7be6\" (UID: \"da937f9e89cf088a54c30093e8a24ca4\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:23.998985 kubelet[2389]: I0317 18:51:23.998773 2389 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aebb7b8c12bd9c158f1fdb95ca1392dc-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-c36c8d7be6\" (UID: \"aebb7b8c12bd9c158f1fdb95ca1392dc\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:23.998985 kubelet[2389]: I0317 18:51:23.998790 2389 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aebb7b8c12bd9c158f1fdb95ca1392dc-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-a-c36c8d7be6\" (UID: \"aebb7b8c12bd9c158f1fdb95ca1392dc\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:24.028186 kubelet[2389]: I0317 18:51:24.028153 2389 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:24.028608 kubelet[2389]: E0317 18:51:24.028581 2389 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:24.219758 env[1566]: time="2025-03-17T18:51:24.219709703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-a-c36c8d7be6,Uid:da937f9e89cf088a54c30093e8a24ca4,Namespace:kube-system,Attempt:0,}" Mar 17 18:51:24.225467 env[1566]: time="2025-03-17T18:51:24.224902108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-a-c36c8d7be6,Uid:aebb7b8c12bd9c158f1fdb95ca1392dc,Namespace:kube-system,Attempt:0,}" Mar 17 18:51:24.225467 env[1566]: time="2025-03-17T18:51:24.225149895Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-a-c36c8d7be6,Uid:28c4b68fbc4f0109758cdd6332ed9dea,Namespace:kube-system,Attempt:0,}" Mar 17 18:51:24.298790 kubelet[2389]: E0317 18:51:24.298741 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-c36c8d7be6?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="800ms" Mar 17 18:51:24.431119 kubelet[2389]: I0317 18:51:24.431083 2389 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:24.431501 kubelet[2389]: E0317 18:51:24.431445 2389 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:24.498275 kubelet[2389]: W0317 18:51:24.498111 2389 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:24.498275 kubelet[2389]: E0317 18:51:24.498171 2389 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:24.727487 kubelet[2389]: W0317 18:51:24.727440 2389 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:24.727487 kubelet[2389]: E0317 18:51:24.727490 2389 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:24.805660 kubelet[2389]: E0317 18:51:24.805457 2389 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-a-c36c8d7be6.182dabc2b751d68f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-a-c36c8d7be6,UID:ci-3510.3.7-a-c36c8d7be6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-a-c36c8d7be6,},FirstTimestamp:2025-03-17 18:51:23.649848975 +0000 UTC m=+1.616156592,LastTimestamp:2025-03-17 18:51:23.649848975 +0000 UTC m=+1.616156592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-a-c36c8d7be6,}" Mar 17 18:51:25.004817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2121606920.mount: Deactivated successfully. 
Mar 17 18:51:25.056962 env[1566]: time="2025-03-17T18:51:25.056819458Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:25.064994 env[1566]: time="2025-03-17T18:51:25.064937918Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:25.082134 env[1566]: time="2025-03-17T18:51:25.082069754Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:25.086677 env[1566]: time="2025-03-17T18:51:25.086635278Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:25.093113 env[1566]: time="2025-03-17T18:51:25.093071426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:25.099668 kubelet[2389]: E0317 18:51:25.099606 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-c36c8d7be6?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="1.6s" Mar 17 18:51:25.109228 env[1566]: time="2025-03-17T18:51:25.109184073Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:25.114842 env[1566]: time="2025-03-17T18:51:25.114790904Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:25.121180 env[1566]: time="2025-03-17T18:51:25.121139856Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:25.124272 kubelet[2389]: W0317 18:51:25.124209 2389 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-c36c8d7be6&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:25.124272 kubelet[2389]: E0317 18:51:25.124280 2389 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-c36c8d7be6&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:25.127627 env[1566]: time="2025-03-17T18:51:25.127578604Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:25.137721 env[1566]: time="2025-03-17T18:51:25.137678962Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:25.150370 env[1566]: time="2025-03-17T18:51:25.150325949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:25.180152 env[1566]: time="2025-03-17T18:51:25.180103291Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:25.219241 env[1566]: time="2025-03-17T18:51:25.219114516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:51:25.219241 env[1566]: time="2025-03-17T18:51:25.219247630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:51:25.219501 env[1566]: time="2025-03-17T18:51:25.219283468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:51:25.228045 env[1566]: time="2025-03-17T18:51:25.219756803Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/042ae5ccb76a22240f90493c38288b732811e7e2f7dbfc8209915dba90b52c9d pid=2429 runtime=io.containerd.runc.v2 Mar 17 18:51:25.233992 kubelet[2389]: I0317 18:51:25.233957 2389 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:25.234342 kubelet[2389]: E0317 18:51:25.234304 2389 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:25.245630 env[1566]: time="2025-03-17T18:51:25.245542632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:51:25.245946 env[1566]: time="2025-03-17T18:51:25.245917452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:51:25.246061 env[1566]: time="2025-03-17T18:51:25.246038886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:51:25.246442 env[1566]: time="2025-03-17T18:51:25.246406907Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c30c8fb880fe75fffc234609af51e547c2bda3a504f730ab06b01c24b46ffaa pid=2455 runtime=io.containerd.runc.v2 Mar 17 18:51:25.278939 env[1566]: time="2025-03-17T18:51:25.273698858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:51:25.278939 env[1566]: time="2025-03-17T18:51:25.273738376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:51:25.278939 env[1566]: time="2025-03-17T18:51:25.273748575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:51:25.278939 env[1566]: time="2025-03-17T18:51:25.273931766Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0209eb702b18888b4d9fe7c87709e81ce6ad21e6676fbd2032766a477c7e82e4 pid=2493 runtime=io.containerd.runc.v2 Mar 17 18:51:25.323755 env[1566]: time="2025-03-17T18:51:25.323624799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-a-c36c8d7be6,Uid:da937f9e89cf088a54c30093e8a24ca4,Namespace:kube-system,Attempt:0,} returns sandbox id \"042ae5ccb76a22240f90493c38288b732811e7e2f7dbfc8209915dba90b52c9d\"" Mar 17 18:51:25.335310 kubelet[2389]: W0317 18:51:25.335239 2389 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:25.335506 kubelet[2389]: E0317 18:51:25.335489 2389 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Mar 17 18:51:25.337159 env[1566]: time="2025-03-17T18:51:25.337119062Z" level=info msg="CreateContainer within sandbox \"042ae5ccb76a22240f90493c38288b732811e7e2f7dbfc8209915dba90b52c9d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:51:25.345182 env[1566]: time="2025-03-17T18:51:25.345134449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-a-c36c8d7be6,Uid:aebb7b8c12bd9c158f1fdb95ca1392dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c30c8fb880fe75fffc234609af51e547c2bda3a504f730ab06b01c24b46ffaa\"" Mar 17 18:51:25.348446 env[1566]: time="2025-03-17T18:51:25.348400680Z" level=info msg="CreateContainer within sandbox \"8c30c8fb880fe75fffc234609af51e547c2bda3a504f730ab06b01c24b46ffaa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:51:25.363972 env[1566]: time="2025-03-17T18:51:25.363917679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-a-c36c8d7be6,Uid:28c4b68fbc4f0109758cdd6332ed9dea,Namespace:kube-system,Attempt:0,} returns sandbox id \"0209eb702b18888b4d9fe7c87709e81ce6ad21e6676fbd2032766a477c7e82e4\"" Mar 17 18:51:25.369045 env[1566]: time="2025-03-17T18:51:25.368996816Z" level=info msg="CreateContainer within sandbox \"0209eb702b18888b4d9fe7c87709e81ce6ad21e6676fbd2032766a477c7e82e4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:51:25.446244 env[1566]: time="2025-03-17T18:51:25.446189230Z" level=info msg="CreateContainer within sandbox \"8c30c8fb880fe75fffc234609af51e547c2bda3a504f730ab06b01c24b46ffaa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a1d3ed6ccabdab9a59043974a38ba77b6da9c06dc8d22bb525a42c4fc5aa4c6f\"" Mar 17 18:51:25.447053 env[1566]: time="2025-03-17T18:51:25.447023667Z" level=info msg="StartContainer for \"a1d3ed6ccabdab9a59043974a38ba77b6da9c06dc8d22bb525a42c4fc5aa4c6f\"" Mar 17 18:51:25.455075 env[1566]: time="2025-03-17T18:51:25.455021694Z" level=info msg="CreateContainer within sandbox \"042ae5ccb76a22240f90493c38288b732811e7e2f7dbfc8209915dba90b52c9d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns 
container id \"1ba2890e8daddb4702653b917ca6ea4c791b4bd60c260215bc6f8cf4c9115bf3\"" Mar 17 18:51:25.455703 env[1566]: time="2025-03-17T18:51:25.455667340Z" level=info msg="StartContainer for \"1ba2890e8daddb4702653b917ca6ea4c791b4bd60c260215bc6f8cf4c9115bf3\"" Mar 17 18:51:25.460807 env[1566]: time="2025-03-17T18:51:25.460760597Z" level=info msg="CreateContainer within sandbox \"0209eb702b18888b4d9fe7c87709e81ce6ad21e6676fbd2032766a477c7e82e4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5990f0e6592f25b06900e3446e4f085eea433a65ec8e73a9c99514f33c1f7c62\"" Mar 17 18:51:25.462074 env[1566]: time="2025-03-17T18:51:25.462041371Z" level=info msg="StartContainer for \"5990f0e6592f25b06900e3446e4f085eea433a65ec8e73a9c99514f33c1f7c62\"" Mar 17 18:51:25.548295 env[1566]: time="2025-03-17T18:51:25.548254119Z" level=info msg="StartContainer for \"a1d3ed6ccabdab9a59043974a38ba77b6da9c06dc8d22bb525a42c4fc5aa4c6f\" returns successfully" Mar 17 18:51:25.578342 env[1566]: time="2025-03-17T18:51:25.578205732Z" level=info msg="StartContainer for \"1ba2890e8daddb4702653b917ca6ea4c791b4bd60c260215bc6f8cf4c9115bf3\" returns successfully" Mar 17 18:51:25.580257 env[1566]: time="2025-03-17T18:51:25.580207349Z" level=info msg="StartContainer for \"5990f0e6592f25b06900e3446e4f085eea433a65ec8e73a9c99514f33c1f7c62\" returns successfully" Mar 17 18:51:26.836632 kubelet[2389]: I0317 18:51:26.836601 2389 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:27.792979 kubelet[2389]: E0317 18:51:27.792926 2389 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-a-c36c8d7be6\" not found" node="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:27.942434 kubelet[2389]: I0317 18:51:27.942383 2389 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:28.662082 kubelet[2389]: I0317 18:51:28.662041 2389 apiserver.go:52] "Watching apiserver" Mar 17 18:51:28.694991 kubelet[2389]: I0317 18:51:28.694953 2389 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:51:30.515823 systemd[1]: Reloading. Mar 17 18:51:30.583402 /usr/lib/systemd/system-generators/torcx-generator[2689]: time="2025-03-17T18:51:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:51:30.583436 /usr/lib/systemd/system-generators/torcx-generator[2689]: time="2025-03-17T18:51:30Z" level=info msg="torcx already run" Mar 17 18:51:30.680494 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:51:30.680667 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:51:30.696811 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 17 18:51:30.812739 kubelet[2389]: I0317 18:51:30.812630 2389 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:51:30.813549 systemd[1]: Stopping kubelet.service... Mar 17 18:51:30.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:30.829942 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:51:30.830269 systemd[1]: Stopped kubelet.service. Mar 17 18:51:30.834396 systemd[1]: Starting kubelet.service... Mar 17 18:51:30.836021 kernel: kauditd_printk_skb: 46 callbacks suppressed Mar 17 18:51:30.836133 kernel: audit: type=1131 audit(1742237490.829:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:31.037521 systemd[1]: Started kubelet.service. Mar 17 18:51:31.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:31.106720 kubelet[2763]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:51:31.107064 kubelet[2763]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:51:31.107114 kubelet[2763]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:51:31.107259 kubelet[2763]: I0317 18:51:31.107234 2763 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:51:31.112208 kubelet[2763]: I0317 18:51:31.112183 2763 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:51:31.112391 kubelet[2763]: I0317 18:51:31.112380 2763 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:51:31.113136 kubelet[2763]: I0317 18:51:31.113120 2763 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:51:31.116940 kubelet[2763]: I0317 18:51:31.116917 2763 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 18:51:31.121089 kubelet[2763]: I0317 18:51:31.121067 2763 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:51:31.356451 kernel: audit: type=1130 audit(1742237491.038:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:31.359803 kubelet[2763]: I0317 18:51:31.358164 2763 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:51:31.359803 kubelet[2763]: I0317 18:51:31.358675 2763 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:51:31.359803 kubelet[2763]: I0317 18:51:31.358706 2763 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-a-c36c8d7be6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:51:31.359803 kubelet[2763]: I0317 18:51:31.358983 2763 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:51:31.360066 kubelet[2763]: I0317 18:51:31.358992 2763 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:51:31.360066 kubelet[2763]: I0317 18:51:31.359032 2763 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:51:31.360066 kubelet[2763]: I0317 18:51:31.359136 2763 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:51:31.360066 kubelet[2763]: I0317 18:51:31.359150 2763 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:51:31.360066 kubelet[2763]: I0317 18:51:31.359180 2763 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:51:31.360066 kubelet[2763]: I0317 18:51:31.359197 2763 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:51:31.361536 kubelet[2763]: I0317 18:51:31.361500 2763 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:51:31.368249 kubelet[2763]: I0317 18:51:31.368223 2763 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:51:31.369647 kubelet[2763]: I0317 18:51:31.369618 2763 server.go:1264] "Started kubelet" Mar 17 18:51:31.373301 kubelet[2763]: E0317 18:51:31.373265 2763 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:51:31.379266 kubelet[2763]: I0317 18:51:31.373576 2763 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:51:31.379266 kubelet[2763]: I0317 18:51:31.378275 2763 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:51:31.379266 kubelet[2763]: I0317 18:51:31.378337 2763 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:51:31.379470 kubelet[2763]: I0317 18:51:31.379384 2763 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:51:31.379000 audit[2763]: AVC avc: denied { mac_admin } for pid=2763 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:51:31.383071 kubelet[2763]: I0317 18:51:31.383034 2763 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Mar 17 18:51:31.383241 kubelet[2763]: I0317 18:51:31.383226 2763 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Mar 17 18:51:31.383329 kubelet[2763]: I0317 18:51:31.383320 2763 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:51:31.398789 kubelet[2763]: I0317 18:51:31.398761 2763 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:51:31.408941 kubelet[2763]: I0317 18:51:31.401255 2763 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:51:31.425338 kernel: audit: type=1400 audit(1742237491.379:238): avc: denied { mac_admin } for pid=2763 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:51:31.425553 kernel: audit: type=1401 audit(1742237491.379:238): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:51:31.379000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:51:31.429609 kubelet[2763]: I0317 18:51:31.429558 2763 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 17 18:51:31.465816 kernel: audit: type=1300 audit(1742237491.379:238): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000be8630 a1=4000bab9b0 a2=4000be8600 a3=25 items=0 ppid=1 pid=2763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:31.379000 audit[2763]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000be8630 a1=4000bab9b0 a2=4000be8600 a3=25 items=0 ppid=1 pid=2763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:31.468488 kubelet[2763]: I0317 18:51:31.468464 2763 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:51:31.475518 kubelet[2763]: I0317 18:51:31.474927 2763 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:51:31.475518 kubelet[2763]: I0317 18:51:31.475136 2763 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:51:31.475695 kubelet[2763]: I0317 18:51:31.475634 2763 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:51:31.379000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:51:31.382000 audit[2763]: AVC avc: denied { mac_admin } for pid=2763 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:51:31.511955 kernel: audit: type=1327 audit(1742237491.379:238): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:51:31.515091 kubelet[2763]: I0317 18:51:31.515058 2763 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:51:31.515329 kubelet[2763]: I0317 18:51:31.515316 2763 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:51:31.527262 kubelet[2763]: I0317 18:51:31.527233 2763 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:51:31.530689 kubelet[2763]: E0317 18:51:31.530654 2763 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:51:31.531608 kubelet[2763]: I0317 18:51:31.524165 2763 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:31.382000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:51:31.557844 kernel: audit: type=1400 audit(1742237491.382:239): avc: denied { mac_admin } for pid=2763 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:51:31.559499 kernel: audit: type=1401 audit(1742237491.382:239): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:51:31.559565 kernel: audit: type=1300 audit(1742237491.382:239): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bafa00 a1=4000bab9c8 a2=4000be86c0 a3=25 items=0 ppid=1 pid=2763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:31.382000 audit[2763]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bafa00 a1=4000bab9c8 a2=4000be86c0 a3=25 items=0 ppid=1 pid=2763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:31.567881 kubelet[2763]: I0317 18:51:31.567831 2763 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:31.568164 kubelet[2763]: I0317 18:51:31.568152 2763 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:31.382000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:51:31.619193 kernel: audit: type=1327 audit(1742237491.382:239): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:51:31.635083 kubelet[2763]: E0317 18:51:31.635041 2763 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:51:31.661660 kubelet[2763]: I0317 18:51:31.661622 2763 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:51:31.661660 kubelet[2763]: I0317 18:51:31.661644 2763 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:51:31.661660 kubelet[2763]: I0317 18:51:31.661667 2763 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:51:31.661858 kubelet[2763]: I0317 18:51:31.661822 2763 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:51:31.661858 kubelet[2763]: I0317 18:51:31.661834 2763 state_mem.go:96] "Updated CPUSet assignments" 
assignments={} Mar 17 18:51:31.661858 kubelet[2763]: I0317 18:51:31.661855 2763 policy_none.go:49] "None policy: Start" Mar 17 18:51:31.662633 kubelet[2763]: I0317 18:51:31.662606 2763 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:51:31.662633 kubelet[2763]: I0317 18:51:31.662638 2763 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:51:31.663173 kubelet[2763]: I0317 18:51:31.662829 2763 state_mem.go:75] "Updated machine memory state" Mar 17 18:51:31.664094 kubelet[2763]: I0317 18:51:31.664063 2763 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:51:31.663000 audit[2763]: AVC avc: denied { mac_admin } for pid=2763 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:51:31.663000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:51:31.663000 audit[2763]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40010fb620 a1=4001146030 a2=40010fb5f0 a3=25 items=0 ppid=1 pid=2763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:31.663000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:51:31.664377 kubelet[2763]: I0317 18:51:31.664157 2763 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Mar 17 18:51:31.664410 kubelet[2763]: I0317 18:51:31.664334 2763 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:51:31.666229 kubelet[2763]: I0317 18:51:31.666201 2763 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:51:31.835695 kubelet[2763]: I0317 18:51:31.835621 2763 topology_manager.go:215] "Topology Admit Handler" podUID="da937f9e89cf088a54c30093e8a24ca4" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:31.835860 kubelet[2763]: I0317 18:51:31.835821 2763 topology_manager.go:215] "Topology Admit Handler" podUID="aebb7b8c12bd9c158f1fdb95ca1392dc" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:31.836248 kubelet[2763]: I0317 18:51:31.836220 2763 topology_manager.go:215] "Topology Admit Handler" podUID="28c4b68fbc4f0109758cdd6332ed9dea" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:31.848744 kubelet[2763]: W0317 18:51:31.848710 2763 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:51:31.852420 kubelet[2763]: W0317 18:51:31.852370 2763 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:51:31.852420 kubelet[2763]: W0317 18:51:31.852392 2763 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:51:31.982734 kubelet[2763]: I0317 18:51:31.982699 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da937f9e89cf088a54c30093e8a24ca4-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-a-c36c8d7be6\" (UID: \"da937f9e89cf088a54c30093e8a24ca4\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:31.982989 kubelet[2763]: I0317 18:51:31.982970 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da937f9e89cf088a54c30093e8a24ca4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-a-c36c8d7be6\" (UID: \"da937f9e89cf088a54c30093e8a24ca4\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:31.983090 kubelet[2763]: I0317 18:51:31.983072 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aebb7b8c12bd9c158f1fdb95ca1392dc-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-c36c8d7be6\" (UID: \"aebb7b8c12bd9c158f1fdb95ca1392dc\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:31.983227 kubelet[2763]: I0317 18:51:31.983185 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aebb7b8c12bd9c158f1fdb95ca1392dc-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-a-c36c8d7be6\" (UID: \"aebb7b8c12bd9c158f1fdb95ca1392dc\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-c36c8d7be6" Mar 17 
18:51:31.983321 kubelet[2763]: I0317 18:51:31.983308 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aebb7b8c12bd9c158f1fdb95ca1392dc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-a-c36c8d7be6\" (UID: \"aebb7b8c12bd9c158f1fdb95ca1392dc\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:31.983399 kubelet[2763]: I0317 18:51:31.983388 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da937f9e89cf088a54c30093e8a24ca4-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-a-c36c8d7be6\" (UID: \"da937f9e89cf088a54c30093e8a24ca4\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:31.983478 kubelet[2763]: I0317 18:51:31.983468 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aebb7b8c12bd9c158f1fdb95ca1392dc-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-c36c8d7be6\" (UID: \"aebb7b8c12bd9c158f1fdb95ca1392dc\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:31.983560 kubelet[2763]: I0317 18:51:31.983549 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aebb7b8c12bd9c158f1fdb95ca1392dc-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-a-c36c8d7be6\" (UID: \"aebb7b8c12bd9c158f1fdb95ca1392dc\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:31.983663 kubelet[2763]: I0317 18:51:31.983651 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28c4b68fbc4f0109758cdd6332ed9dea-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-a-c36c8d7be6\" (UID: \"28c4b68fbc4f0109758cdd6332ed9dea\") " pod="kube-system/kube-scheduler-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:32.361479 kubelet[2763]: I0317 18:51:32.361343 2763 apiserver.go:52] "Watching apiserver" Mar 17 18:51:32.420932 kubelet[2763]: I0317 18:51:32.419644 2763 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:51:32.564911 kubelet[2763]: W0317 18:51:32.561363 2763 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:51:32.564911 kubelet[2763]: E0317 18:51:32.561454 2763 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-a-c36c8d7be6\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-a-c36c8d7be6" Mar 17 18:51:32.593085 kubelet[2763]: I0317 18:51:32.592926 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-a-c36c8d7be6" podStartSLOduration=1.592905636 podStartE2EDuration="1.592905636s" podCreationTimestamp="2025-03-17 18:51:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:51:32.569612819 +0000 UTC m=+1.517816935" watchObservedRunningTime="2025-03-17 18:51:32.592905636 +0000 UTC m=+1.541109712" Mar 17 18:51:32.620961 kubelet[2763]: I0317 18:51:32.620709 2763 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-a-c36c8d7be6" podStartSLOduration=1.620690575 podStartE2EDuration="1.620690575s" podCreationTimestamp="2025-03-17 18:51:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:51:32.620376189 +0000 UTC m=+1.568580305" watchObservedRunningTime="2025-03-17 18:51:32.620690575 +0000 UTC m=+1.568894731" Mar 17 18:51:32.621144 kubelet[2763]: I0317 18:51:32.620997 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-a-c36c8d7be6" podStartSLOduration=1.620990762 podStartE2EDuration="1.620990762s" podCreationTimestamp="2025-03-17 18:51:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:51:32.592769522 +0000 UTC m=+1.540973638" watchObservedRunningTime="2025-03-17 18:51:32.620990762 +0000 UTC m=+1.569194878" Mar 17 18:51:36.284456 sudo[2014]: pam_unix(sudo:session): session closed for user root Mar 17 18:51:36.283000 audit[2014]: USER_END pid=2014 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:51:36.289734 kernel: kauditd_printk_skb: 4 callbacks suppressed Mar 17 18:51:36.289844 kernel: audit: type=1106 audit(1742237496.283:241): pid=2014 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:51:36.288000 audit[2014]: CRED_DISP pid=2014 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:51:36.334951 kernel: audit: type=1104 audit(1742237496.288:242): pid=2014 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:51:36.400967 sshd[2010]: pam_unix(sshd:session): session closed for user core Mar 17 18:51:36.401000 audit[2010]: USER_END pid=2010 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:51:36.401000 audit[2010]: CRED_DISP pid=2010 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:51:36.430558 systemd[1]: sshd@6-10.200.20.12:22-10.200.16.10:37968.service: Deactivated successfully. Mar 17 18:51:36.430902 kernel: audit: type=1106 audit(1742237496.401:243): pid=2010 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:51:36.455187 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 17 18:51:36.455648 systemd-logind[1549]: Session 9 logged out. Waiting for processes to exit. Mar 17 18:51:36.456474 systemd-logind[1549]: Removed session 9. Mar 17 18:51:36.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.12:22-10.200.16.10:37968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:36.479001 kernel: audit: type=1104 audit(1742237496.401:244): pid=2010 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:51:36.479101 kernel: audit: type=1131 audit(1742237496.429:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.12:22-10.200.16.10:37968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:51:44.066535 kubelet[2763]: I0317 18:51:44.066498 2763 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:51:44.067041 env[1566]: time="2025-03-17T18:51:44.066837852Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 18:51:44.067266 kubelet[2763]: I0317 18:51:44.067043 2763 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:51:44.660136 kubelet[2763]: I0317 18:51:44.660092 2763 topology_manager.go:215] "Topology Admit Handler" podUID="ddca6c64-d24b-4642-8b49-5c686bb1e8cf" podNamespace="kube-system" podName="kube-proxy-98cgr" Mar 17 18:51:44.761884 kubelet[2763]: I0317 18:51:44.761821 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddca6c64-d24b-4642-8b49-5c686bb1e8cf-xtables-lock\") pod \"kube-proxy-98cgr\" (UID: \"ddca6c64-d24b-4642-8b49-5c686bb1e8cf\") " pod="kube-system/kube-proxy-98cgr" Mar 17 18:51:44.762047 kubelet[2763]: I0317 18:51:44.761924 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddca6c64-d24b-4642-8b49-5c686bb1e8cf-lib-modules\") pod \"kube-proxy-98cgr\" (UID: \"ddca6c64-d24b-4642-8b49-5c686bb1e8cf\") " pod="kube-system/kube-proxy-98cgr" Mar 17 18:51:44.762047 kubelet[2763]: I0317 18:51:44.761969 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgx4j\" (UniqueName: \"kubernetes.io/projected/ddca6c64-d24b-4642-8b49-5c686bb1e8cf-kube-api-access-pgx4j\") pod \"kube-proxy-98cgr\" (UID: \"ddca6c64-d24b-4642-8b49-5c686bb1e8cf\") " pod="kube-system/kube-proxy-98cgr" Mar 17 18:51:44.762047 kubelet[2763]: I0317 18:51:44.761997 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ddca6c64-d24b-4642-8b49-5c686bb1e8cf-kube-proxy\") pod \"kube-proxy-98cgr\" (UID: \"ddca6c64-d24b-4642-8b49-5c686bb1e8cf\") " pod="kube-system/kube-proxy-98cgr" Mar 17 18:51:44.964492 env[1566]: time="2025-03-17T18:51:44.964102204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98cgr,Uid:ddca6c64-d24b-4642-8b49-5c686bb1e8cf,Namespace:kube-system,Attempt:0,}" Mar 17 18:51:45.015917 env[1566]: 
time="2025-03-17T18:51:45.013953672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:51:45.015917 env[1566]: time="2025-03-17T18:51:45.013995631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:51:45.015917 env[1566]: time="2025-03-17T18:51:45.014006430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:51:45.015917 env[1566]: time="2025-03-17T18:51:45.014133426Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52677104375fba457a44c2b460e1cb8b26a32b63c0f656611b8f22fd5cc6def2 pid=2848 runtime=io.containerd.runc.v2 Mar 17 18:51:45.068682 env[1566]: time="2025-03-17T18:51:45.068638883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98cgr,Uid:ddca6c64-d24b-4642-8b49-5c686bb1e8cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"52677104375fba457a44c2b460e1cb8b26a32b63c0f656611b8f22fd5cc6def2\"" Mar 17 18:51:45.072639 env[1566]: time="2025-03-17T18:51:45.072588631Z" level=info msg="CreateContainer within sandbox \"52677104375fba457a44c2b460e1cb8b26a32b63c0f656611b8f22fd5cc6def2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:51:45.140617 env[1566]: time="2025-03-17T18:51:45.140519278Z" level=info msg="CreateContainer within sandbox \"52677104375fba457a44c2b460e1cb8b26a32b63c0f656611b8f22fd5cc6def2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7d1e3cf8aa93439779aef516b2257016c2e65de44f5e5e4cb5961cbd7c72de3e\"" Mar 17 18:51:45.141843 env[1566]: time="2025-03-17T18:51:45.141810675Z" level=info msg="StartContainer for \"7d1e3cf8aa93439779aef516b2257016c2e65de44f5e5e4cb5961cbd7c72de3e\"" Mar 17 18:51:45.187684 kubelet[2763]: I0317 18:51:45.186607 2763 topology_manager.go:215] "Topology Admit Handler" podUID="63dbfb5a-97f2-4537-b93e-dea175a544d3" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-7btxp" Mar 17 18:51:45.244341 env[1566]: time="2025-03-17T18:51:45.244202650Z" level=info msg="StartContainer for \"7d1e3cf8aa93439779aef516b2257016c2e65de44f5e5e4cb5961cbd7c72de3e\" returns successfully" Mar 17 18:51:45.265530 kubelet[2763]: I0317 18:51:45.265472 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63dbfb5a-97f2-4537-b93e-dea175a544d3-var-lib-calico\") pod \"tigera-operator-7bc55997bb-7btxp\" (UID: \"63dbfb5a-97f2-4537-b93e-dea175a544d3\") " pod="tigera-operator/tigera-operator-7bc55997bb-7btxp" Mar 17 18:51:45.265818 kubelet[2763]: I0317 18:51:45.265784 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zghjl\" (UniqueName: \"kubernetes.io/projected/63dbfb5a-97f2-4537-b93e-dea175a544d3-kube-api-access-zghjl\") pod \"tigera-operator-7bc55997bb-7btxp\" (UID: \"63dbfb5a-97f2-4537-b93e-dea175a544d3\") " pod="tigera-operator/tigera-operator-7bc55997bb-7btxp" Mar 17 18:51:45.310000 audit[2939]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.310000 audit[2939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe9da55a0 a2=0 a3=1 items=0 ppid=2898 pid=2939 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.365787 kernel: audit: type=1325 audit(1742237505.310:246): table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.365962 kernel: audit: type=1300 audit(1742237505.310:246): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe9da55a0 a2=0 a3=1 items=0 ppid=2898 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.310000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:51:45.384120 kernel: audit: type=1327 audit(1742237505.310:246): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:51:45.314000 audit[2940]: NETFILTER_CFG table=mangle:42 family=10 entries=1 op=nft_register_chain pid=2940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.401238 kernel: audit: type=1325 audit(1742237505.314:247): table=mangle:42 family=10 entries=1 op=nft_register_chain pid=2940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.401395 kernel: audit: type=1300 audit(1742237505.314:247): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe1023c50 a2=0 a3=1 items=0 ppid=2898 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.314000 audit[2940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe1023c50 a2=0 a3=1 items=0 ppid=2898 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.314000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:51:45.452371 kernel: audit: type=1327 audit(1742237505.314:247): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:51:45.452524 kernel: audit: type=1325 audit(1742237505.317:248): table=nat:43 family=10 entries=1 op=nft_register_chain pid=2941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.317000 audit[2941]: NETFILTER_CFG table=nat:43 family=10 entries=1 op=nft_register_chain pid=2941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.317000 audit[2941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce341510 a2=0 a3=1 items=0 ppid=2898 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.491188 env[1566]: time="2025-03-17T18:51:45.490779202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-7btxp,Uid:63dbfb5a-97f2-4537-b93e-dea175a544d3,Namespace:tigera-operator,Attempt:0,}" Mar 17 18:51:45.498671 kernel: audit: type=1300 audit(1742237505.317:248): 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce341510 a2=0 a3=1 items=0 ppid=2898 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.498853 kernel: audit: type=1327 audit(1742237505.317:248): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Mar 17 18:51:45.317000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Mar 17 18:51:45.318000 audit[2942]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.532123 kernel: audit: type=1325 audit(1742237505.318:249): table=filter:44 family=10 entries=1 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.318000 audit[2942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffea15e630 a2=0 a3=1 items=0 ppid=2898 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Mar 17 18:51:45.333000 audit[2943]: NETFILTER_CFG table=nat:45 family=2 entries=1 op=nft_register_chain pid=2943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.333000 audit[2943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6a8c020 a2=0 a3=1 items=0 ppid=2898 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.333000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Mar 17 18:51:45.367000 audit[2944]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.367000 audit[2944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe0826590 a2=0 a3=1 items=0 ppid=2898 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.367000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Mar 17 18:51:45.421000 audit[2946]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.421000 audit[2946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffebe88c50 a2=0 a3=1 items=0 ppid=2898 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.421000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Mar 17 18:51:45.425000 
audit[2948]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.425000 audit[2948]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffba77460 a2=0 a3=1 items=0 ppid=2898 pid=2948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.425000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Mar 17 18:51:45.430000 audit[2951]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=2951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.430000 audit[2951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc51cf640 a2=0 a3=1 items=0 ppid=2898 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.430000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Mar 17 18:51:45.432000 audit[2952]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.432000 audit[2952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe93afb70 a2=0 a3=1 items=0 ppid=2898 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.432000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Mar 17 18:51:45.437000 audit[2954]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.437000 audit[2954]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe8c9ad70 a2=0 a3=1 items=0 ppid=2898 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.437000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Mar 17 18:51:45.438000 audit[2955]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.438000 audit[2955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4f85dc0 a2=0 a3=1 items=0 ppid=2898 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.438000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Mar 17 18:51:45.442000 audit[2957]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.442000 audit[2957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe48c8790 a2=0 a3=1 items=0 ppid=2898 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.442000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Mar 17 18:51:45.447000 audit[2960]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.447000 audit[2960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffdac5a30 a2=0 a3=1 items=0 ppid=2898 pid=2960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.447000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Mar 17 18:51:45.448000 audit[2961]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=2961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.448000 audit[2961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5cae8b0 a2=0 a3=1 items=0 ppid=2898 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.448000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Mar 17 18:51:45.452000 audit[2963]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.452000 audit[2963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffca188fe0 a2=0 a3=1 items=0 ppid=2898 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.452000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Mar 17 18:51:45.453000 audit[2964]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.453000 audit[2964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe81d29a0 a2=0 a3=1 
items=0 ppid=2898 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.453000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Mar 17 18:51:45.456000 audit[2966]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=2966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.456000 audit[2966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe3a5d6a0 a2=0 a3=1 items=0 ppid=2898 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.456000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Mar 17 18:51:45.461000 audit[2969]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_rule pid=2969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.461000 audit[2969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffb13fb60 a2=0 a3=1 items=0 ppid=2898 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.461000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Mar 17 18:51:45.539000 audit[2972]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=2972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.539000 audit[2972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc583a2f0 a2=0 a3=1 items=0 ppid=2898 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.539000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Mar 17 18:51:45.541000 audit[2973]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.541000 audit[2973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd8bfe2c0 a2=0 a3=1 items=0 ppid=2898 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Mar 17 18:51:45.545000 audit[2975]: NETFILTER_CFG table=nat:62 family=2 
entries=1 op=nft_register_rule pid=2975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.545000 audit[2975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc65294f0 a2=0 a3=1 items=0 ppid=2898 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.545000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Mar 17 18:51:45.548000 audit[2978]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_rule pid=2978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.548000 audit[2978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc2080540 a2=0 a3=1 items=0 ppid=2898 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.548000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Mar 17 18:51:45.550000 audit[2979]: NETFILTER_CFG table=nat:64 family=2 entries=1 op=nft_register_chain pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.550000 audit[2979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc8ddbf0 a2=0 a3=1 items=0 ppid=2898 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.550000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Mar 17 18:51:45.552000 audit[2981]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_rule pid=2981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:51:45.552000 audit[2981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffeb595340 a2=0 a3=1 items=0 ppid=2898 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.552000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Mar 17 18:51:45.581453 env[1566]: time="2025-03-17T18:51:45.581359012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:51:45.581453 env[1566]: time="2025-03-17T18:51:45.581442929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:51:45.581682 env[1566]: time="2025-03-17T18:51:45.581469608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:51:45.581682 env[1566]: time="2025-03-17T18:51:45.581621523Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f012945d420ed7893186b48f7477b88f6df97f07c848ab7d3e400d047c73f3b pid=2995 runtime=io.containerd.runc.v2 Mar 17 18:51:45.626000 audit[2987]: NETFILTER_CFG table=filter:66 family=2 entries=8 op=nft_register_rule pid=2987 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:51:45.626000 audit[2987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffe9abb950 a2=0 a3=1 items=0 ppid=2898 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.626000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:51:45.628111 env[1566]: time="2025-03-17T18:51:45.627593905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-7btxp,Uid:63dbfb5a-97f2-4537-b93e-dea175a544d3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5f012945d420ed7893186b48f7477b88f6df97f07c848ab7d3e400d047c73f3b\"" Mar 17 18:51:45.633088 env[1566]: time="2025-03-17T18:51:45.633044123Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Mar 17 18:51:45.636000 audit[2987]: NETFILTER_CFG table=nat:67 family=2 entries=14 op=nft_register_chain pid=2987 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:51:45.636000 audit[2987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffe9abb950 a2=0 a3=1 items=0 ppid=2898 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.636000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:51:45.639000 audit[3032]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.639000 audit[3032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc65c06d0 a2=0 a3=1 items=0 ppid=2898 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.639000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Mar 17 18:51:45.649000 audit[3034]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=3034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.649000 audit[3034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc744c610 a2=0 a3=1 items=0 ppid=2898 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.649000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Mar 17 18:51:45.653000 audit[3038]: NETFILTER_CFG table=filter:70 family=10 entries=2 op=nft_register_chain pid=3038 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.653000 audit[3038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffcf06cfd0 a2=0 a3=1 items=0 ppid=2898 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.653000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Mar 17 18:51:45.654000 audit[3039]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_chain pid=3039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.654000 audit[3039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffcf38e20 a2=0 a3=1 items=0 ppid=2898 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.654000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Mar 17 18:51:45.657000 audit[3041]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.657000 audit[3041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd7bce430 a2=0 a3=1 items=0 ppid=2898 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.657000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Mar 17 18:51:45.659000 audit[3042]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.659000 audit[3042]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe72806d0 a2=0 a3=1 items=0 ppid=2898 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.659000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Mar 17 18:51:45.662000 audit[3044]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.662000 audit[3044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe5817bb0 a2=0 a3=1 items=0 ppid=2898 pid=3044 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.662000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Mar 17 18:51:45.666000 audit[3047]: NETFILTER_CFG table=filter:75 family=10 entries=2 op=nft_register_chain pid=3047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.666000 audit[3047]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe6dbac90 a2=0 a3=1 items=0 ppid=2898 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.666000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Mar 17 18:51:45.667000 audit[3048]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_chain pid=3048 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.667000 audit[3048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc04fb80 a2=0 a3=1 items=0 ppid=2898 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Mar 17 18:51:45.670000 audit[3050]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.670000 audit[3050]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc90fb680 a2=0 a3=1 items=0 ppid=2898 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.670000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Mar 17 18:51:45.671000 audit[3051]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_chain pid=3051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.671000 audit[3051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd92fb5e0 a2=0 a3=1 items=0 ppid=2898 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.671000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Mar 17 18:51:45.674000 audit[3053]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule 
pid=3053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.674000 audit[3053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdb698300 a2=0 a3=1 items=0 ppid=2898 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.674000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Mar 17 18:51:45.677000 audit[3056]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.677000 audit[3056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe9870120 a2=0 a3=1 items=0 ppid=2898 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.677000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Mar 17 18:51:45.681000 audit[3059]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.681000 audit[3059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff7bcb2d0 a2=0 a3=1 items=0 ppid=2898 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.681000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Mar 17 18:51:45.682000 audit[3060]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.682000 audit[3060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc5bdd250 a2=0 a3=1 items=0 ppid=2898 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.682000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Mar 17 18:51:45.685000 audit[3062]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.685000 audit[3062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffc0345e10 a2=0 a3=1 items=0 ppid=2898 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.685000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Mar 17 18:51:45.688000 audit[3065]: NETFILTER_CFG table=nat:84 family=10 entries=2 op=nft_register_chain pid=3065 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.688000 audit[3065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe9ce5e90 a2=0 a3=1 items=0 ppid=2898 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.688000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Mar 17 18:51:45.689000 audit[3066]: NETFILTER_CFG table=nat:85 family=10 entries=1 op=nft_register_chain pid=3066 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.689000 audit[3066]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8637bf0 a2=0 a3=1 items=0 ppid=2898 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.689000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Mar 17 18:51:45.691000 audit[3068]: NETFILTER_CFG table=nat:86 family=10 entries=2 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.691000 audit[3068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc3ee9800 a2=0 a3=1 items=0 ppid=2898 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.691000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Mar 17 18:51:45.693000 audit[3069]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3069 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.693000 audit[3069]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe6dc2fb0 a2=0 a3=1 items=0 ppid=2898 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.693000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Mar 17 18:51:45.695000 audit[3071]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.695000 audit[3071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffef4c5a00 a2=0 a3=1 items=0 ppid=2898 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.695000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Mar 17 18:51:45.698000 audit[3074]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_rule pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:51:45.698000 audit[3074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc2d24970 a2=0 a3=1 items=0 ppid=2898 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.698000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Mar 17 18:51:45.701000 audit[3076]: NETFILTER_CFG table=filter:90 family=10 entries=3 op=nft_register_rule pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Mar 17 18:51:45.701000 audit[3076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=ffffc8c820c0 a2=0 a3=1 items=0 ppid=2898 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.701000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:51:45.702000 audit[3076]: NETFILTER_CFG table=nat:91 family=10 entries=7 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Mar 17 18:51:45.702000 audit[3076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffc8c820c0 a2=0 a3=1 items=0 ppid=2898 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:45.702000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:51:48.006690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3994449022.mount: Deactivated successfully. 
Mar 17 18:51:49.059913 env[1566]: time="2025-03-17T18:51:49.059856723Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:49.069635 env[1566]: time="2025-03-17T18:51:49.069590301Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:49.079391 env[1566]: time="2025-03-17T18:51:49.079355278Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:49.086132 env[1566]: time="2025-03-17T18:51:49.085142339Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:49.086132 env[1566]: time="2025-03-17T18:51:49.085440329Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Mar 17 18:51:49.089378 env[1566]: time="2025-03-17T18:51:49.089340408Z" level=info msg="CreateContainer within sandbox \"5f012945d420ed7893186b48f7477b88f6df97f07c848ab7d3e400d047c73f3b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 17 18:51:49.130448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027934883.mount: Deactivated successfully. Mar 17 18:51:49.152491 env[1566]: time="2025-03-17T18:51:49.152437892Z" level=info msg="CreateContainer within sandbox \"5f012945d420ed7893186b48f7477b88f6df97f07c848ab7d3e400d047c73f3b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5bc711b525444ec33f9d6e2295fd5473cbe685d01c92bb67387e09613f59f146\"" Mar 17 18:51:49.154310 env[1566]: time="2025-03-17T18:51:49.153910446Z" level=info msg="StartContainer for \"5bc711b525444ec33f9d6e2295fd5473cbe685d01c92bb67387e09613f59f146\"" Mar 17 18:51:49.207207 env[1566]: time="2025-03-17T18:51:49.206503376Z" level=info msg="StartContainer for \"5bc711b525444ec33f9d6e2295fd5473cbe685d01c92bb67387e09613f59f146\" returns successfully" Mar 17 18:51:49.581542 kubelet[2763]: I0317 18:51:49.581483 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-98cgr" podStartSLOduration=5.58146267 podStartE2EDuration="5.58146267s" podCreationTimestamp="2025-03-17 18:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:51:45.575499688 +0000 UTC m=+14.523703804" watchObservedRunningTime="2025-03-17 18:51:49.58146267 +0000 UTC m=+18.529666746" Mar 17 18:51:51.545105 kubelet[2763]: I0317 18:51:51.545053 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-7btxp" podStartSLOduration=3.087557755 podStartE2EDuration="6.545037357s" podCreationTimestamp="2025-03-17 18:51:45 +0000 UTC" firstStartedPulling="2025-03-17 18:51:45.630297015 +0000 UTC m=+14.578501131" lastFinishedPulling="2025-03-17 18:51:49.087776617 +0000 UTC m=+18.035980733" observedRunningTime="2025-03-17 18:51:49.582542237 +0000 UTC m=+18.530746353" watchObservedRunningTime="2025-03-17 18:51:51.545037357 +0000 UTC 
m=+20.493241473" Mar 17 18:51:53.003909 kernel: kauditd_printk_skb: 143 callbacks suppressed Mar 17 18:51:53.004070 kernel: audit: type=1325 audit(1742237512.983:297): table=filter:92 family=2 entries=15 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:51:52.983000 audit[3115]: NETFILTER_CFG table=filter:92 family=2 entries=15 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:51:52.983000 audit[3115]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffc3dd4740 a2=0 a3=1 items=0 ppid=2898 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:53.039960 kernel: audit: type=1300 audit(1742237512.983:297): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffc3dd4740 a2=0 a3=1 items=0 ppid=2898 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:52.983000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:51:53.059108 kernel: audit: type=1327 audit(1742237512.983:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:51:53.009000 audit[3115]: NETFILTER_CFG table=nat:93 family=2 entries=12 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:51:53.076662 kernel: audit: type=1325 audit(1742237513.009:298): table=nat:93 family=2 entries=12 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:51:53.009000 audit[3115]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc3dd4740 a2=0 a3=1 items=0 ppid=2898 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:53.110911 kernel: audit: type=1300 audit(1742237513.009:298): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc3dd4740 a2=0 a3=1 items=0 ppid=2898 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:53.009000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:51:53.134897 kernel: audit: type=1327 audit(1742237513.009:298): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:51:53.139000 audit[3117]: NETFILTER_CFG table=filter:94 family=2 entries=16 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:51:53.139000 audit[3117]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffffcbc7a20 a2=0 a3=1 items=0 ppid=2898 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:53.187996 kernel: audit: 
type=1325 audit(1742237513.139:299): table=filter:94 family=2 entries=16 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:51:53.188098 kernel: audit: type=1300 audit(1742237513.139:299): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffffcbc7a20 a2=0 a3=1 items=0 ppid=2898 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:53.139000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:51:53.203491 kernel: audit: type=1327 audit(1742237513.139:299): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:51:53.188000 audit[3117]: NETFILTER_CFG table=nat:95 family=2 entries=12 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:51:53.218510 kernel: audit: type=1325 audit(1742237513.188:300): table=nat:95 family=2 entries=12 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:51:53.188000 audit[3117]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffcbc7a20 a2=0 a3=1 items=0 ppid=2898 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:53.188000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:51:54.115604 kubelet[2763]: I0317 18:51:54.115557 2763 topology_manager.go:215] "Topology Admit Handler" podUID="85fe48be-c6c4-4081-b678-22bcad9cc58e" podNamespace="calico-system" podName="calico-typha-77449894bd-wqnzv" Mar 17 18:51:54.217152 kubelet[2763]: I0317 18:51:54.217110 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85fe48be-c6c4-4081-b678-22bcad9cc58e-tigera-ca-bundle\") pod \"calico-typha-77449894bd-wqnzv\" (UID: \"85fe48be-c6c4-4081-b678-22bcad9cc58e\") " pod="calico-system/calico-typha-77449894bd-wqnzv" Mar 17 18:51:54.217345 kubelet[2763]: I0317 18:51:54.217330 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/85fe48be-c6c4-4081-b678-22bcad9cc58e-typha-certs\") pod \"calico-typha-77449894bd-wqnzv\" (UID: \"85fe48be-c6c4-4081-b678-22bcad9cc58e\") " pod="calico-system/calico-typha-77449894bd-wqnzv" Mar 17 18:51:54.217428 kubelet[2763]: I0317 18:51:54.217415 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fmkn\" (UniqueName: \"kubernetes.io/projected/85fe48be-c6c4-4081-b678-22bcad9cc58e-kube-api-access-8fmkn\") pod \"calico-typha-77449894bd-wqnzv\" (UID: \"85fe48be-c6c4-4081-b678-22bcad9cc58e\") " pod="calico-system/calico-typha-77449894bd-wqnzv" Mar 17 18:51:54.224000 audit[3119]: NETFILTER_CFG table=filter:96 family=2 entries=17 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:51:54.224000 audit[3119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6652 a0=3 a1=ffffee8974f0 a2=0 a3=1 items=0 ppid=2898 
pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:54.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:51:54.230000 audit[3119]: NETFILTER_CFG table=nat:97 family=2 entries=12 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:51:54.230000 audit[3119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffee8974f0 a2=0 a3=1 items=0 ppid=2898 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:54.230000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:51:54.555221 kubelet[2763]: I0317 18:51:54.555171 2763 topology_manager.go:215] "Topology Admit Handler" podUID="dc1f5898-8bbc-463b-9469-3cdc08a0b14d" podNamespace="calico-system" podName="calico-node-5nqvl" Mar 17 18:51:54.620620 kubelet[2763]: I0317 18:51:54.620574 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dc1f5898-8bbc-463b-9469-3cdc08a0b14d-cni-bin-dir\") pod \"calico-node-5nqvl\" (UID: \"dc1f5898-8bbc-463b-9469-3cdc08a0b14d\") " pod="calico-system/calico-node-5nqvl" Mar 17 18:51:54.620818 kubelet[2763]: I0317 18:51:54.620801 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc1f5898-8bbc-463b-9469-3cdc08a0b14d-tigera-ca-bundle\") pod \"calico-node-5nqvl\" (UID: \"dc1f5898-8bbc-463b-9469-3cdc08a0b14d\") " pod="calico-system/calico-node-5nqvl" Mar 17 18:51:54.620934 kubelet[2763]: I0317 18:51:54.620919 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dc1f5898-8bbc-463b-9469-3cdc08a0b14d-cni-net-dir\") pod \"calico-node-5nqvl\" (UID: \"dc1f5898-8bbc-463b-9469-3cdc08a0b14d\") " pod="calico-system/calico-node-5nqvl" Mar 17 18:51:54.621031 kubelet[2763]: I0317 18:51:54.621016 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc1f5898-8bbc-463b-9469-3cdc08a0b14d-xtables-lock\") pod \"calico-node-5nqvl\" (UID: \"dc1f5898-8bbc-463b-9469-3cdc08a0b14d\") " pod="calico-system/calico-node-5nqvl" Mar 17 18:51:54.621139 kubelet[2763]: I0317 18:51:54.621102 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dc1f5898-8bbc-463b-9469-3cdc08a0b14d-var-lib-calico\") pod \"calico-node-5nqvl\" (UID: \"dc1f5898-8bbc-463b-9469-3cdc08a0b14d\") " pod="calico-system/calico-node-5nqvl" Mar 17 18:51:54.621227 kubelet[2763]: I0317 18:51:54.621214 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dc1f5898-8bbc-463b-9469-3cdc08a0b14d-cni-log-dir\") pod \"calico-node-5nqvl\" (UID: \"dc1f5898-8bbc-463b-9469-3cdc08a0b14d\") " 
pod="calico-system/calico-node-5nqvl" Mar 17 18:51:54.621317 kubelet[2763]: I0317 18:51:54.621304 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc1f5898-8bbc-463b-9469-3cdc08a0b14d-lib-modules\") pod \"calico-node-5nqvl\" (UID: \"dc1f5898-8bbc-463b-9469-3cdc08a0b14d\") " pod="calico-system/calico-node-5nqvl" Mar 17 18:51:54.621406 kubelet[2763]: I0317 18:51:54.621393 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dc1f5898-8bbc-463b-9469-3cdc08a0b14d-var-run-calico\") pod \"calico-node-5nqvl\" (UID: \"dc1f5898-8bbc-463b-9469-3cdc08a0b14d\") " pod="calico-system/calico-node-5nqvl" Mar 17 18:51:54.621490 kubelet[2763]: I0317 18:51:54.621475 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct784\" (UniqueName: \"kubernetes.io/projected/dc1f5898-8bbc-463b-9469-3cdc08a0b14d-kube-api-access-ct784\") pod \"calico-node-5nqvl\" (UID: \"dc1f5898-8bbc-463b-9469-3cdc08a0b14d\") " pod="calico-system/calico-node-5nqvl" Mar 17 18:51:54.621607 kubelet[2763]: I0317 18:51:54.621593 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dc1f5898-8bbc-463b-9469-3cdc08a0b14d-flexvol-driver-host\") pod \"calico-node-5nqvl\" (UID: \"dc1f5898-8bbc-463b-9469-3cdc08a0b14d\") " pod="calico-system/calico-node-5nqvl" Mar 17 18:51:54.621690 kubelet[2763]: I0317 18:51:54.621677 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dc1f5898-8bbc-463b-9469-3cdc08a0b14d-policysync\") pod \"calico-node-5nqvl\" (UID: \"dc1f5898-8bbc-463b-9469-3cdc08a0b14d\") " pod="calico-system/calico-node-5nqvl" Mar 17 18:51:54.621765 kubelet[2763]: I0317 18:51:54.621752 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dc1f5898-8bbc-463b-9469-3cdc08a0b14d-node-certs\") pod \"calico-node-5nqvl\" (UID: \"dc1f5898-8bbc-463b-9469-3cdc08a0b14d\") " pod="calico-system/calico-node-5nqvl" Mar 17 18:51:54.719734 env[1566]: time="2025-03-17T18:51:54.719686093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77449894bd-wqnzv,Uid:85fe48be-c6c4-4081-b678-22bcad9cc58e,Namespace:calico-system,Attempt:0,}" Mar 17 18:51:54.724679 kubelet[2763]: E0317 18:51:54.724644 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:54.724679 kubelet[2763]: W0317 18:51:54.724669 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:54.724810 kubelet[2763]: E0317 18:51:54.724691 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:54.724886 kubelet[2763]: E0317 18:51:54.724846 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:54.724886 kubelet[2763]: W0317 18:51:54.724860 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:54.724950 kubelet[2763]: E0317 18:51:54.724891 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:54.725574 kubelet[2763]: E0317 18:51:54.725039 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:54.725574 kubelet[2763]: W0317 18:51:54.725050 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:54.725574 kubelet[2763]: E0317 18:51:54.725059 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:54.725574 kubelet[2763]: E0317 18:51:54.725195 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:54.725574 kubelet[2763]: W0317 18:51:54.725204 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:54.725574 kubelet[2763]: E0317 18:51:54.725212 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:54.734181 kubelet[2763]: E0317 18:51:54.733697 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:54.734181 kubelet[2763]: W0317 18:51:54.733715 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:54.734181 kubelet[2763]: E0317 18:51:54.733730 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:54.777101 env[1566]: time="2025-03-17T18:51:54.777028787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:51:54.777101 env[1566]: time="2025-03-17T18:51:54.777066826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:51:54.777101 env[1566]: time="2025-03-17T18:51:54.777076826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:51:54.777526 env[1566]: time="2025-03-17T18:51:54.777486534Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebed26649815e429de3dfb99d7edee0efe5b3e24fac4c606256418186a54a6f3 pid=3136 runtime=io.containerd.runc.v2 Mar 17 18:51:54.824046 kubelet[2763]: E0317 18:51:54.823179 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:54.824246 kubelet[2763]: W0317 18:51:54.824221 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:54.824349 kubelet[2763]: E0317 18:51:54.824335 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:54.824697 kubelet[2763]: E0317 18:51:54.824673 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:54.824801 kubelet[2763]: W0317 18:51:54.824784 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:54.824890 kubelet[2763]: E0317 18:51:54.824855 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:54.826010 kubelet[2763]: E0317 18:51:54.825980 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:54.826010 kubelet[2763]: W0317 18:51:54.826009 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:54.826129 kubelet[2763]: E0317 18:51:54.826027 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:54.826230 kubelet[2763]: E0317 18:51:54.826216 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:54.826230 kubelet[2763]: W0317 18:51:54.826229 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:54.826293 kubelet[2763]: E0317 18:51:54.826238 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:54.826391 kubelet[2763]: E0317 18:51:54.826377 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:54.826391 kubelet[2763]: W0317 18:51:54.826388 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:54.826464 kubelet[2763]: E0317 18:51:54.826401 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:54.826570 kubelet[2763]: E0317 18:51:54.826555 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:54.826570 kubelet[2763]: W0317 18:51:54.826568 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:54.826631 kubelet[2763]: E0317 18:51:54.826577 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:54.826926 kubelet[2763]: E0317 18:51:54.826909 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:54.826926 kubelet[2763]: W0317 18:51:54.826923 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:54.827121 kubelet[2763]: E0317 18:51:54.826934 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:54.837179 env[1566]: time="2025-03-17T18:51:54.837138523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77449894bd-wqnzv,Uid:85fe48be-c6c4-4081-b678-22bcad9cc58e,Namespace:calico-system,Attempt:0,} returns sandbox id \"ebed26649815e429de3dfb99d7edee0efe5b3e24fac4c606256418186a54a6f3\"" Mar 17 18:51:54.839052 env[1566]: time="2025-03-17T18:51:54.839023589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Mar 17 18:51:54.859094 env[1566]: time="2025-03-17T18:51:54.859051141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5nqvl,Uid:dc1f5898-8bbc-463b-9469-3cdc08a0b14d,Namespace:calico-system,Attempt:0,}" Mar 17 18:51:54.914946 kubelet[2763]: I0317 18:51:54.912628 2763 topology_manager.go:215] "Topology Admit Handler" podUID="b729951e-7fde-40b6-a0f1-675d7c66febf" podNamespace="calico-system" podName="csi-node-driver-g7tkt" Mar 17 18:51:54.914946 kubelet[2763]: E0317 18:51:54.912975 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7tkt" podUID="b729951e-7fde-40b6-a0f1-675d7c66febf" Mar 17 18:51:54.931428 env[1566]: time="2025-03-17T18:51:54.931343212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:51:54.931622 env[1566]: time="2025-03-17T18:51:54.931599084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:51:54.931719 env[1566]: time="2025-03-17T18:51:54.931698202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:51:54.932023 env[1566]: time="2025-03-17T18:51:54.931981994Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/446e4191b6f76f3ed1a157c98d804ff4da02901442216400716f23ffae83cc20 pid=3191 runtime=io.containerd.runc.v2 Mar 17 18:51:54.972093 env[1566]: time="2025-03-17T18:51:54.972052417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5nqvl,Uid:dc1f5898-8bbc-463b-9469-3cdc08a0b14d,Namespace:calico-system,Attempt:0,} returns sandbox id \"446e4191b6f76f3ed1a157c98d804ff4da02901442216400716f23ffae83cc20\"" Mar 17 18:51:55.003086 kubelet[2763]: E0317 18:51:55.003050 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.003086 kubelet[2763]: W0317 18:51:55.003077 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.003269 kubelet[2763]: E0317 18:51:55.003104 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.003269 kubelet[2763]: E0317 18:51:55.003258 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.003269 kubelet[2763]: W0317 18:51:55.003265 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.003332 kubelet[2763]: E0317 18:51:55.003274 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.003430 kubelet[2763]: E0317 18:51:55.003399 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.003430 kubelet[2763]: W0317 18:51:55.003413 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.003430 kubelet[2763]: E0317 18:51:55.003421 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:55.003556 kubelet[2763]: E0317 18:51:55.003538 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.003556 kubelet[2763]: W0317 18:51:55.003552 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.003615 kubelet[2763]: E0317 18:51:55.003560 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.003707 kubelet[2763]: E0317 18:51:55.003689 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.003707 kubelet[2763]: W0317 18:51:55.003703 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.003773 kubelet[2763]: E0317 18:51:55.003712 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.003842 kubelet[2763]: E0317 18:51:55.003825 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.003842 kubelet[2763]: W0317 18:51:55.003838 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.003939 kubelet[2763]: E0317 18:51:55.003846 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.004013 kubelet[2763]: E0317 18:51:55.003995 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.004013 kubelet[2763]: W0317 18:51:55.004010 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.004106 kubelet[2763]: E0317 18:51:55.004019 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.004175 kubelet[2763]: E0317 18:51:55.004148 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.004175 kubelet[2763]: W0317 18:51:55.004162 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.004175 kubelet[2763]: E0317 18:51:55.004170 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:55.004319 kubelet[2763]: E0317 18:51:55.004300 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.004319 kubelet[2763]: W0317 18:51:55.004316 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.004386 kubelet[2763]: E0317 18:51:55.004326 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.004462 kubelet[2763]: E0317 18:51:55.004443 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.004462 kubelet[2763]: W0317 18:51:55.004458 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.004525 kubelet[2763]: E0317 18:51:55.004466 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.004594 kubelet[2763]: E0317 18:51:55.004577 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.004594 kubelet[2763]: W0317 18:51:55.004590 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.004663 kubelet[2763]: E0317 18:51:55.004598 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.004739 kubelet[2763]: E0317 18:51:55.004719 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.004739 kubelet[2763]: W0317 18:51:55.004734 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.004802 kubelet[2763]: E0317 18:51:55.004742 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.004904 kubelet[2763]: E0317 18:51:55.004888 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.004904 kubelet[2763]: W0317 18:51:55.004902 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.004967 kubelet[2763]: E0317 18:51:55.004912 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:55.005054 kubelet[2763]: E0317 18:51:55.005035 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.005054 kubelet[2763]: W0317 18:51:55.005050 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.005123 kubelet[2763]: E0317 18:51:55.005059 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.005189 kubelet[2763]: E0317 18:51:55.005169 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.005189 kubelet[2763]: W0317 18:51:55.005176 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.005189 kubelet[2763]: E0317 18:51:55.005183 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.005321 kubelet[2763]: E0317 18:51:55.005303 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.005321 kubelet[2763]: W0317 18:51:55.005319 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.005378 kubelet[2763]: E0317 18:51:55.005327 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.005477 kubelet[2763]: E0317 18:51:55.005459 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.005477 kubelet[2763]: W0317 18:51:55.005474 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.005477 kubelet[2763]: E0317 18:51:55.005483 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.005614 kubelet[2763]: E0317 18:51:55.005597 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.005614 kubelet[2763]: W0317 18:51:55.005611 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.005695 kubelet[2763]: E0317 18:51:55.005619 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:55.005744 kubelet[2763]: E0317 18:51:55.005732 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.005744 kubelet[2763]: W0317 18:51:55.005739 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.005801 kubelet[2763]: E0317 18:51:55.005746 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.005903 kubelet[2763]: E0317 18:51:55.005888 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.005903 kubelet[2763]: W0317 18:51:55.005901 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.005986 kubelet[2763]: E0317 18:51:55.005910 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.026317 kubelet[2763]: E0317 18:51:55.026287 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.026317 kubelet[2763]: W0317 18:51:55.026309 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.026476 kubelet[2763]: E0317 18:51:55.026326 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.026476 kubelet[2763]: I0317 18:51:55.026356 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b729951e-7fde-40b6-a0f1-675d7c66febf-varrun\") pod \"csi-node-driver-g7tkt\" (UID: \"b729951e-7fde-40b6-a0f1-675d7c66febf\") " pod="calico-system/csi-node-driver-g7tkt" Mar 17 18:51:55.026574 kubelet[2763]: E0317 18:51:55.026549 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.026574 kubelet[2763]: W0317 18:51:55.026572 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.026656 kubelet[2763]: E0317 18:51:55.026588 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:55.026656 kubelet[2763]: I0317 18:51:55.026603 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b729951e-7fde-40b6-a0f1-675d7c66febf-socket-dir\") pod \"csi-node-driver-g7tkt\" (UID: \"b729951e-7fde-40b6-a0f1-675d7c66febf\") " pod="calico-system/csi-node-driver-g7tkt" Mar 17 18:51:55.026800 kubelet[2763]: E0317 18:51:55.026782 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.026841 kubelet[2763]: W0317 18:51:55.026801 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.026841 kubelet[2763]: E0317 18:51:55.026818 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.026841 kubelet[2763]: I0317 18:51:55.026835 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4nwp\" (UniqueName: \"kubernetes.io/projected/b729951e-7fde-40b6-a0f1-675d7c66febf-kube-api-access-w4nwp\") pod \"csi-node-driver-g7tkt\" (UID: \"b729951e-7fde-40b6-a0f1-675d7c66febf\") " pod="calico-system/csi-node-driver-g7tkt" Mar 17 18:51:55.027018 kubelet[2763]: E0317 18:51:55.027001 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.027018 kubelet[2763]: W0317 18:51:55.027016 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.027085 kubelet[2763]: E0317 18:51:55.027028 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.027085 kubelet[2763]: I0317 18:51:55.027042 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b729951e-7fde-40b6-a0f1-675d7c66febf-kubelet-dir\") pod \"csi-node-driver-g7tkt\" (UID: \"b729951e-7fde-40b6-a0f1-675d7c66febf\") " pod="calico-system/csi-node-driver-g7tkt" Mar 17 18:51:55.027207 kubelet[2763]: E0317 18:51:55.027187 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.027207 kubelet[2763]: W0317 18:51:55.027202 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.027269 kubelet[2763]: E0317 18:51:55.027211 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:55.027269 kubelet[2763]: I0317 18:51:55.027225 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b729951e-7fde-40b6-a0f1-675d7c66febf-registration-dir\") pod \"csi-node-driver-g7tkt\" (UID: \"b729951e-7fde-40b6-a0f1-675d7c66febf\") " pod="calico-system/csi-node-driver-g7tkt" Mar 17 18:51:55.027451 kubelet[2763]: E0317 18:51:55.027431 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.027451 kubelet[2763]: W0317 18:51:55.027447 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.027518 kubelet[2763]: E0317 18:51:55.027462 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.027634 kubelet[2763]: E0317 18:51:55.027619 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.027675 kubelet[2763]: W0317 18:51:55.027635 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.027732 kubelet[2763]: E0317 18:51:55.027713 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.027844 kubelet[2763]: E0317 18:51:55.027821 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.027844 kubelet[2763]: W0317 18:51:55.027836 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.027985 kubelet[2763]: E0317 18:51:55.027955 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.028034 kubelet[2763]: E0317 18:51:55.028000 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.028034 kubelet[2763]: W0317 18:51:55.028008 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.028034 kubelet[2763]: E0317 18:51:55.028021 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:55.028171 kubelet[2763]: E0317 18:51:55.028157 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.028171 kubelet[2763]: W0317 18:51:55.028169 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.028241 kubelet[2763]: E0317 18:51:55.028181 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.028328 kubelet[2763]: E0317 18:51:55.028313 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.028328 kubelet[2763]: W0317 18:51:55.028326 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.028402 kubelet[2763]: E0317 18:51:55.028337 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.028474 kubelet[2763]: E0317 18:51:55.028459 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.028474 kubelet[2763]: W0317 18:51:55.028471 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.028532 kubelet[2763]: E0317 18:51:55.028479 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.028667 kubelet[2763]: E0317 18:51:55.028649 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.028667 kubelet[2763]: W0317 18:51:55.028664 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.028738 kubelet[2763]: E0317 18:51:55.028673 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.028831 kubelet[2763]: E0317 18:51:55.028817 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.028831 kubelet[2763]: W0317 18:51:55.028829 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.028975 kubelet[2763]: E0317 18:51:55.028837 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:55.029002 kubelet[2763]: E0317 18:51:55.028992 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.029002 kubelet[2763]: W0317 18:51:55.029000 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.029046 kubelet[2763]: E0317 18:51:55.029008 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.128062 kubelet[2763]: E0317 18:51:55.127960 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.128062 kubelet[2763]: W0317 18:51:55.127986 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.128062 kubelet[2763]: E0317 18:51:55.128006 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.129481 kubelet[2763]: E0317 18:51:55.129123 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.129481 kubelet[2763]: W0317 18:51:55.129140 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.129481 kubelet[2763]: E0317 18:51:55.129165 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.129481 kubelet[2763]: E0317 18:51:55.129376 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.129481 kubelet[2763]: W0317 18:51:55.129385 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.129481 kubelet[2763]: E0317 18:51:55.129394 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.130174 kubelet[2763]: E0317 18:51:55.130048 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.130174 kubelet[2763]: W0317 18:51:55.130061 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.130174 kubelet[2763]: E0317 18:51:55.130076 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:55.130702 kubelet[2763]: E0317 18:51:55.130526 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.130702 kubelet[2763]: W0317 18:51:55.130539 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.130702 kubelet[2763]: E0317 18:51:55.130626 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.131123 kubelet[2763]: E0317 18:51:55.130991 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.131123 kubelet[2763]: W0317 18:51:55.131003 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.131123 kubelet[2763]: E0317 18:51:55.131094 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.131453 kubelet[2763]: E0317 18:51:55.131310 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.131453 kubelet[2763]: W0317 18:51:55.131320 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.131453 kubelet[2763]: E0317 18:51:55.131413 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.131736 kubelet[2763]: E0317 18:51:55.131602 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.131736 kubelet[2763]: W0317 18:51:55.131613 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.131736 kubelet[2763]: E0317 18:51:55.131693 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.132019 kubelet[2763]: E0317 18:51:55.131906 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.132019 kubelet[2763]: W0317 18:51:55.131917 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.132019 kubelet[2763]: E0317 18:51:55.131994 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:55.132329 kubelet[2763]: E0317 18:51:55.132212 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.132329 kubelet[2763]: W0317 18:51:55.132222 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.132329 kubelet[2763]: E0317 18:51:55.132307 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.132593 kubelet[2763]: E0317 18:51:55.132480 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.132593 kubelet[2763]: W0317 18:51:55.132489 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.132593 kubelet[2763]: E0317 18:51:55.132570 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.133057 kubelet[2763]: E0317 18:51:55.132753 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.133057 kubelet[2763]: W0317 18:51:55.132763 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.133057 kubelet[2763]: E0317 18:51:55.132849 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.133057 kubelet[2763]: E0317 18:51:55.132949 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.133057 kubelet[2763]: W0317 18:51:55.132957 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.133057 kubelet[2763]: E0317 18:51:55.133034 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.133395 kubelet[2763]: E0317 18:51:55.133288 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.133395 kubelet[2763]: W0317 18:51:55.133299 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.133395 kubelet[2763]: E0317 18:51:55.133315 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:55.133851 kubelet[2763]: E0317 18:51:55.133584 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.133851 kubelet[2763]: W0317 18:51:55.133595 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.133851 kubelet[2763]: E0317 18:51:55.133671 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.133851 kubelet[2763]: E0317 18:51:55.133757 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.133851 kubelet[2763]: W0317 18:51:55.133764 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.133851 kubelet[2763]: E0317 18:51:55.133833 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.134968 kubelet[2763]: E0317 18:51:55.133973 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.134968 kubelet[2763]: W0317 18:51:55.133981 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.134968 kubelet[2763]: E0317 18:51:55.134043 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.134968 kubelet[2763]: E0317 18:51:55.134172 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.134968 kubelet[2763]: W0317 18:51:55.134178 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.134968 kubelet[2763]: E0317 18:51:55.134248 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.134968 kubelet[2763]: E0317 18:51:55.134417 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.134968 kubelet[2763]: W0317 18:51:55.134425 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.134968 kubelet[2763]: E0317 18:51:55.134527 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:55.134968 kubelet[2763]: E0317 18:51:55.134621 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.135168 kubelet[2763]: W0317 18:51:55.134629 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.135168 kubelet[2763]: E0317 18:51:55.134639 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.135403 kubelet[2763]: E0317 18:51:55.135260 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.135403 kubelet[2763]: W0317 18:51:55.135275 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.135403 kubelet[2763]: E0317 18:51:55.135290 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.135712 kubelet[2763]: E0317 18:51:55.135575 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.135712 kubelet[2763]: W0317 18:51:55.135585 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.135712 kubelet[2763]: E0317 18:51:55.135684 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.136084 kubelet[2763]: E0317 18:51:55.135944 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.136084 kubelet[2763]: W0317 18:51:55.135955 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.136084 kubelet[2763]: E0317 18:51:55.136052 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.136391 kubelet[2763]: E0317 18:51:55.136270 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.136391 kubelet[2763]: W0317 18:51:55.136281 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.136391 kubelet[2763]: E0317 18:51:55.136362 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:55.136810 kubelet[2763]: E0317 18:51:55.136550 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.136810 kubelet[2763]: W0317 18:51:55.136559 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.136810 kubelet[2763]: E0317 18:51:55.136570 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.231287 kubelet[2763]: E0317 18:51:55.231261 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.231463 kubelet[2763]: W0317 18:51:55.231446 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.231531 kubelet[2763]: E0317 18:51:55.231519 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:55.240000 audit[3290]: NETFILTER_CFG table=filter:98 family=2 entries=18 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:51:55.240000 audit[3290]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6652 a0=3 a1=ffffcb6c3fb0 a2=0 a3=1 items=0 ppid=2898 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:55.240000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:51:55.259000 audit[3290]: NETFILTER_CFG table=nat:99 family=2 entries=12 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:51:55.259000 audit[3290]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcb6c3fb0 a2=0 a3=1 items=0 ppid=2898 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:51:55.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:51:55.269234 kubelet[2763]: E0317 18:51:55.269206 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:55.269408 kubelet[2763]: W0317 18:51:55.269393 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:55.269477 kubelet[2763]: E0317 18:51:55.269464 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:56.336163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2141439231.mount: Deactivated successfully. Mar 17 18:51:56.532406 kubelet[2763]: E0317 18:51:56.532360 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7tkt" podUID="b729951e-7fde-40b6-a0f1-675d7c66febf" Mar 17 18:51:56.988722 env[1566]: time="2025-03-17T18:51:56.988659017Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:57.001163 env[1566]: time="2025-03-17T18:51:57.001117796Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:57.007119 env[1566]: time="2025-03-17T18:51:57.007062555Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:57.012794 env[1566]: time="2025-03-17T18:51:57.012749242Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:57.013304 env[1566]: time="2025-03-17T18:51:57.013273108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Mar 17 18:51:57.019220 env[1566]: time="2025-03-17T18:51:57.017178603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Mar 17 18:51:57.035646 env[1566]: time="2025-03-17T18:51:57.035593426Z" level=info msg="CreateContainer within sandbox \"ebed26649815e429de3dfb99d7edee0efe5b3e24fac4c606256418186a54a6f3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 17 18:51:57.073237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697854796.mount: Deactivated successfully. 
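Note: the kubelet messages that dominate this stretch of the log share one root cause: kubelet keeps probing the FlexVolume plugin directory nodeagent~uds, but the driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds cannot be run (kubelet reports "executable file not found in $PATH"), so every `init` call produces empty output that then fails JSON parsing ("unexpected end of JSON input"). The sketch below is illustrative only, not kubelet or auditd source; the bare name "uds" is a stand-in for the missing driver. It reproduces the two standard-library errors quoted above and decodes the hex PROCTITLE value from the audit records at 18:51:55 back into the iptables-restore command line.

```go
// flexvol_log_notes.go - a small, self-contained sketch (assumption: purely
// illustrative, not kubelet or auditd code) that reproduces the two library
// errors seen in the log and decodes the audit PROCTITLE record.
package main

import (
	"encoding/hex"
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// 1. "unexpected end of JSON input": the FlexVolume protocol expects the
	//    driver's `init` call to print a JSON status object; empty output
	//    makes encoding/json fail with exactly this error.
	var status map[string]interface{}
	if err := json.Unmarshal([]byte(""), &status); err != nil {
		fmt.Println("empty driver output:", err)
	}

	// 2. "executable file not found in $PATH": looking up a binary that does
	//    not exist ("uds" here is only a stand-in for the missing
	//    nodeagent~uds driver) yields exec.ErrNotFound with this message.
	if _, err := exec.LookPath("uds"); err != nil {
		fmt.Println("missing driver binary:", err)
	}

	// 3. The audit PROCTITLE field is the process command line, hex encoded
	//    with NUL-separated arguments. Decoding the value logged at
	//    18:51:55.240000 recovers the iptables-restore invocation.
	const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
	// Prints: iptables-restore -w 5 -W 100000 --noflush --counters
}
```

Running the sketch prints the same two error strings recorded by kubelet and the decoded audit command line (iptables-restore -w 5 -W 100000 --noflush --counters); the surrounding containerd entries show that the sandbox, image pull, and container start for calico-typha proceed normally despite the repeated FlexVolume probe failures.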
Mar 17 18:51:57.097995 env[1566]: time="2025-03-17T18:51:57.097924986Z" level=info msg="CreateContainer within sandbox \"ebed26649815e429de3dfb99d7edee0efe5b3e24fac4c606256418186a54a6f3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"94a9a91228abbb014eca52c41d7478dacd374e621b8d78e779b15af94a32939f\"" Mar 17 18:51:57.098785 env[1566]: time="2025-03-17T18:51:57.098734004Z" level=info msg="StartContainer for \"94a9a91228abbb014eca52c41d7478dacd374e621b8d78e779b15af94a32939f\"" Mar 17 18:51:57.166650 env[1566]: time="2025-03-17T18:51:57.164054084Z" level=info msg="StartContainer for \"94a9a91228abbb014eca52c41d7478dacd374e621b8d78e779b15af94a32939f\" returns successfully" Mar 17 18:51:57.628418 kubelet[2763]: E0317 18:51:57.628379 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.628418 kubelet[2763]: W0317 18:51:57.628407 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.628418 kubelet[2763]: E0317 18:51:57.628430 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.628832 kubelet[2763]: E0317 18:51:57.628595 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.628832 kubelet[2763]: W0317 18:51:57.628602 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.628832 kubelet[2763]: E0317 18:51:57.628611 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.628832 kubelet[2763]: E0317 18:51:57.628736 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.628832 kubelet[2763]: W0317 18:51:57.628743 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.628832 kubelet[2763]: E0317 18:51:57.628751 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.629012 kubelet[2763]: E0317 18:51:57.628888 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.629012 kubelet[2763]: W0317 18:51:57.628896 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.629012 kubelet[2763]: E0317 18:51:57.628905 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:57.629080 kubelet[2763]: E0317 18:51:57.629042 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.629080 kubelet[2763]: W0317 18:51:57.629049 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.629080 kubelet[2763]: E0317 18:51:57.629057 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.629194 kubelet[2763]: E0317 18:51:57.629168 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.629194 kubelet[2763]: W0317 18:51:57.629182 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.629194 kubelet[2763]: E0317 18:51:57.629190 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.629323 kubelet[2763]: E0317 18:51:57.629305 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.629323 kubelet[2763]: W0317 18:51:57.629318 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.629387 kubelet[2763]: E0317 18:51:57.629328 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.629469 kubelet[2763]: E0317 18:51:57.629452 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.629469 kubelet[2763]: W0317 18:51:57.629464 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.629542 kubelet[2763]: E0317 18:51:57.629472 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.629625 kubelet[2763]: E0317 18:51:57.629605 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.629625 kubelet[2763]: W0317 18:51:57.629619 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.629625 kubelet[2763]: E0317 18:51:57.629627 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:57.629761 kubelet[2763]: E0317 18:51:57.629743 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.629761 kubelet[2763]: W0317 18:51:57.629756 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.629831 kubelet[2763]: E0317 18:51:57.629764 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.629949 kubelet[2763]: E0317 18:51:57.629932 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.629949 kubelet[2763]: W0317 18:51:57.629946 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.630026 kubelet[2763]: E0317 18:51:57.629954 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.630099 kubelet[2763]: E0317 18:51:57.630080 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.630099 kubelet[2763]: W0317 18:51:57.630093 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.630173 kubelet[2763]: E0317 18:51:57.630102 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.630245 kubelet[2763]: E0317 18:51:57.630227 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.631280 kubelet[2763]: W0317 18:51:57.630406 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.631280 kubelet[2763]: E0317 18:51:57.630428 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.631280 kubelet[2763]: E0317 18:51:57.630603 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.631280 kubelet[2763]: W0317 18:51:57.630611 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.631280 kubelet[2763]: E0317 18:51:57.630619 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:57.631280 kubelet[2763]: E0317 18:51:57.630741 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.631280 kubelet[2763]: W0317 18:51:57.630748 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.631280 kubelet[2763]: E0317 18:51:57.630755 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.650389 kubelet[2763]: E0317 18:51:57.650361 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.650584 kubelet[2763]: W0317 18:51:57.650566 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.650664 kubelet[2763]: E0317 18:51:57.650651 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.650995 kubelet[2763]: E0317 18:51:57.650981 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.651092 kubelet[2763]: W0317 18:51:57.651078 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.651169 kubelet[2763]: E0317 18:51:57.651155 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.651478 kubelet[2763]: E0317 18:51:57.651453 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.651478 kubelet[2763]: W0317 18:51:57.651472 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.651568 kubelet[2763]: E0317 18:51:57.651493 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.651651 kubelet[2763]: E0317 18:51:57.651637 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.651651 kubelet[2763]: W0317 18:51:57.651650 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.651717 kubelet[2763]: E0317 18:51:57.651662 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:57.651816 kubelet[2763]: E0317 18:51:57.651798 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.651816 kubelet[2763]: W0317 18:51:57.651812 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.651905 kubelet[2763]: E0317 18:51:57.651820 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.652078 kubelet[2763]: E0317 18:51:57.652059 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.652078 kubelet[2763]: W0317 18:51:57.652073 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.652145 kubelet[2763]: E0317 18:51:57.652089 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.652457 kubelet[2763]: E0317 18:51:57.652411 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.652537 kubelet[2763]: W0317 18:51:57.652523 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.652628 kubelet[2763]: E0317 18:51:57.652614 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.652811 kubelet[2763]: E0317 18:51:57.652792 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.652811 kubelet[2763]: W0317 18:51:57.652809 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.652917 kubelet[2763]: E0317 18:51:57.652825 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.653066 kubelet[2763]: E0317 18:51:57.653033 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.653066 kubelet[2763]: W0317 18:51:57.653062 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.653143 kubelet[2763]: E0317 18:51:57.653080 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:57.653245 kubelet[2763]: E0317 18:51:57.653217 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.653245 kubelet[2763]: W0317 18:51:57.653231 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.653245 kubelet[2763]: E0317 18:51:57.653240 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.653408 kubelet[2763]: E0317 18:51:57.653392 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.653408 kubelet[2763]: W0317 18:51:57.653406 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.653473 kubelet[2763]: E0317 18:51:57.653419 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.653766 kubelet[2763]: E0317 18:51:57.653746 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.653766 kubelet[2763]: W0317 18:51:57.653763 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.653914 kubelet[2763]: E0317 18:51:57.653895 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.654015 kubelet[2763]: E0317 18:51:57.653940 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.654087 kubelet[2763]: W0317 18:51:57.654072 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.654182 kubelet[2763]: E0317 18:51:57.654155 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.654430 kubelet[2763]: E0317 18:51:57.654412 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.654501 kubelet[2763]: W0317 18:51:57.654487 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.654560 kubelet[2763]: E0317 18:51:57.654548 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:51:57.654828 kubelet[2763]: E0317 18:51:57.654814 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.654930 kubelet[2763]: W0317 18:51:57.654914 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.655014 kubelet[2763]: E0317 18:51:57.655001 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.655265 kubelet[2763]: E0317 18:51:57.655251 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.655339 kubelet[2763]: W0317 18:51:57.655324 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.655400 kubelet[2763]: E0317 18:51:57.655387 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.655710 kubelet[2763]: E0317 18:51:57.655695 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.655799 kubelet[2763]: W0317 18:51:57.655784 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.655905 kubelet[2763]: E0317 18:51:57.655855 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:57.656375 kubelet[2763]: E0317 18:51:57.656357 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:51:57.656464 kubelet[2763]: W0317 18:51:57.656448 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:51:57.656547 kubelet[2763]: E0317 18:51:57.656533 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:51:58.132992 waagent[1788]: 2025-03-17T18:51:58.132850Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Mar 17 18:51:58.143699 waagent[1788]: 2025-03-17T18:51:58.143622Z INFO ExtHandler Mar 17 18:51:58.144120 waagent[1788]: 2025-03-17T18:51:58.144061Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ebf700fd-8a98-43fd-996a-e61665646f51 eTag: 8018821786497874017 source: Fabric] Mar 17 18:51:58.145137 waagent[1788]: 2025-03-17T18:51:58.145069Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
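The block of repeated driver-call.go:262 / driver-call.go:149 / plugins.go:730 messages earlier in this window is the kubelet's FlexVolume prober at work: on each probe pass it executes the plugin binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument `init` and tries to unmarshal the JSON the driver prints. Because that executable is missing, the call produces empty output and the unmarshal fails with "unexpected end of JSON input". As a minimal, illustrative sketch only (not the real nodeagent~uds driver, which would normally be installed by a node-agent DaemonSet), a FlexVolume driver answering `init` looks roughly like this:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet's FlexVolume driver-call
// code expects back from the plugin executable.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out)) // an empty reply is what yields "unexpected end of JSON input"
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Tell the kubelet the driver is usable and does not implement attach/detach.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		// Unimplemented calls are reported as "Not supported" per the FlexVolume contract.
		reply(driverStatus{Status: "Not supported"})
	}
}
```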
Mar 17 18:51:58.146685 waagent[1788]: 2025-03-17T18:51:58.146617Z INFO ExtHandler Mar 17 18:51:58.146986 waagent[1788]: 2025-03-17T18:51:58.146930Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Mar 17 18:51:58.256587 waagent[1788]: 2025-03-17T18:51:58.256383Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 17 18:51:58.385851 waagent[1788]: 2025-03-17T18:51:58.385618Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D5D03C8EC0519BD58507CCA8C3BC2E70F44A956F', 'hasPrivateKey': True} Mar 17 18:51:58.387491 waagent[1788]: 2025-03-17T18:51:58.387362Z INFO ExtHandler Downloaded certificate {'thumbprint': '16252F8E366E7F65C5140DAF5CA3D0E18F269B2E', 'hasPrivateKey': False} Mar 17 18:51:58.389127 waagent[1788]: 2025-03-17T18:51:58.388974Z INFO ExtHandler Fetch goal state completed Mar 17 18:51:58.390348 waagent[1788]: 2025-03-17T18:51:58.390258Z INFO ExtHandler ExtHandler VM enabled for RSM updates, switching to RSM update mode Mar 17 18:51:58.392039 waagent[1788]: 2025-03-17T18:51:58.391956Z INFO ExtHandler ExtHandler Mar 17 18:51:58.392201 waagent[1788]: 2025-03-17T18:51:58.392144Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 3c0eeed3-8802-4167-b038-074676bcd1e8 correlation 48065f79-1003-43b5-9b72-51b0088b45a5 created: 2025-03-17T18:51:48.536156Z] Mar 17 18:51:58.393139 waagent[1788]: 2025-03-17T18:51:58.393062Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 17 18:51:58.395334 waagent[1788]: 2025-03-17T18:51:58.395244Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 3 ms] Mar 17 18:51:58.458624 env[1566]: time="2025-03-17T18:51:58.458578432Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:58.468989 env[1566]: time="2025-03-17T18:51:58.468950517Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:58.476133 env[1566]: time="2025-03-17T18:51:58.476092887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:58.481819 env[1566]: time="2025-03-17T18:51:58.481762417Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:51:58.482107 env[1566]: time="2025-03-17T18:51:58.482070649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Mar 17 18:51:58.486655 env[1566]: time="2025-03-17T18:51:58.486592529Z" level=info msg="CreateContainer within sandbox \"446e4191b6f76f3ed1a157c98d804ff4da02901442216400716f23ffae83cc20\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 18:51:58.528483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2895043119.mount: Deactivated successfully. 
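The ImageCreate/ImageUpdate events and the PullImage result above are emitted by containerd's CRI plugin as it pulls ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1 and resolves it to a digest-pinned image reference. Roughly the same pull can be reproduced against the same containerd socket with the containerd Go client; this is only a sketch of the client API, not the kubelet's own code path, and it assumes the CRI plugin's default "k8s.io" namespace:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the containerd instance the kubelet is using.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin keeps Kubernetes images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the image named in the log entries above.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "->", img.Target().Digest)
}
```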
Mar 17 18:51:58.531945 kubelet[2763]: E0317 18:51:58.531268 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7tkt" podUID="b729951e-7fde-40b6-a0f1-675d7c66febf" Mar 17 18:51:58.552388 env[1566]: time="2025-03-17T18:51:58.552340946Z" level=info msg="CreateContainer within sandbox \"446e4191b6f76f3ed1a157c98d804ff4da02901442216400716f23ffae83cc20\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2fb80cd7ef9da4198ade2999e95ab29db0e466c6696d2a10485067ca7cc0b487\"" Mar 17 18:51:58.554404 env[1566]: time="2025-03-17T18:51:58.554086259Z" level=info msg="StartContainer for \"2fb80cd7ef9da4198ade2999e95ab29db0e466c6696d2a10485067ca7cc0b487\"" Mar 17 18:51:58.579364 systemd[1]: run-containerd-runc-k8s.io-2fb80cd7ef9da4198ade2999e95ab29db0e466c6696d2a10485067ca7cc0b487-runc.YO5YoU.mount: Deactivated successfully. Mar 17 18:51:58.588636 kubelet[2763]: I0317 18:51:58.587960 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:51:58.619743 env[1566]: time="2025-03-17T18:51:58.619666360Z" level=info msg="StartContainer for \"2fb80cd7ef9da4198ade2999e95ab29db0e466c6696d2a10485067ca7cc0b487\" returns successfully" Mar 17 18:51:59.520568 env[1566]: time="2025-03-17T18:51:59.520519014Z" level=info msg="shim disconnected" id=2fb80cd7ef9da4198ade2999e95ab29db0e466c6696d2a10485067ca7cc0b487 Mar 17 18:51:59.521162 env[1566]: time="2025-03-17T18:51:59.521126318Z" level=warning msg="cleaning up after shim disconnected" id=2fb80cd7ef9da4198ade2999e95ab29db0e466c6696d2a10485067ca7cc0b487 namespace=k8s.io Mar 17 18:51:59.521265 env[1566]: time="2025-03-17T18:51:59.521249715Z" level=info msg="cleaning up dead shim" Mar 17 18:51:59.524609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fb80cd7ef9da4198ade2999e95ab29db0e466c6696d2a10485067ca7cc0b487-rootfs.mount: Deactivated successfully. 
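The flexvol-driver container started above is short-lived: it copies the Calico FlexVolume binary onto the host and exits, which is why a "shim disconnected" cleanup follows right after the successful StartContainer. The create/start/wait-for-exit sequence looks roughly like this through the containerd Go client (a sketch that reuses `client`, `ctx` and `img` from the previous snippet and additionally imports "github.com/containerd/containerd/cio" and "github.com/containerd/containerd/oci"; the container and snapshot IDs are made up, and the kubelet actually drives this via the CRI API rather than these helpers):

```go
// runOnce creates a container from img, starts it, and waits for it to exit,
// mirroring the CreateContainer / StartContainer / "shim disconnected"
// sequence in the log.
func runOnce(ctx context.Context, client *containerd.Client, img containerd.Image) error {
	container, err := client.NewContainer(ctx, "flexvol-demo", // illustrative ID
		containerd.WithNewSnapshot("flexvol-demo-snapshot", img),
		containerd.WithNewSpec(oci.WithImageConfig(img)),
	)
	if err != nil {
		return err
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		return err
	}
	defer task.Delete(ctx)

	exitC, err := task.Wait(ctx) // subscribe to the exit event before starting
	if err != nil {
		return err
	}
	if err := task.Start(ctx); err != nil {
		return err
	}

	status := <-exitC // a one-shot container exits on its own, as above
	code, _, err := status.Result()
	if err != nil {
		return err
	}
	fmt.Printf("container exited with status %d\n", code)
	return nil
}
```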
Mar 17 18:51:59.533641 env[1566]: time="2025-03-17T18:51:59.533602552Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:51:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3424 runtime=io.containerd.runc.v2\n" Mar 17 18:51:59.592875 env[1566]: time="2025-03-17T18:51:59.592826927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Mar 17 18:51:59.859592 kubelet[2763]: I0317 18:51:59.859230 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77449894bd-wqnzv" podStartSLOduration=3.682468207 podStartE2EDuration="5.859212217s" podCreationTimestamp="2025-03-17 18:51:54 +0000 UTC" firstStartedPulling="2025-03-17 18:51:54.838268371 +0000 UTC m=+23.786472447" lastFinishedPulling="2025-03-17 18:51:57.015012341 +0000 UTC m=+25.963216457" observedRunningTime="2025-03-17 18:51:57.600669475 +0000 UTC m=+26.548873591" watchObservedRunningTime="2025-03-17 18:51:59.859212217 +0000 UTC m=+28.807416333" Mar 17 18:52:00.531794 kubelet[2763]: E0317 18:52:00.531737 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7tkt" podUID="b729951e-7fde-40b6-a0f1-675d7c66febf" Mar 17 18:52:02.531461 kubelet[2763]: E0317 18:52:02.531400 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7tkt" podUID="b729951e-7fde-40b6-a0f1-675d7c66febf" Mar 17 18:52:03.359448 env[1566]: time="2025-03-17T18:52:03.359388788Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:03.371464 env[1566]: time="2025-03-17T18:52:03.371419773Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:03.379568 env[1566]: time="2025-03-17T18:52:03.379516134Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:03.384078 env[1566]: time="2025-03-17T18:52:03.384037303Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:03.385485 env[1566]: time="2025-03-17T18:52:03.385450468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Mar 17 18:52:03.387840 env[1566]: time="2025-03-17T18:52:03.387800411Z" level=info msg="CreateContainer within sandbox \"446e4191b6f76f3ed1a157c98d804ff4da02901442216400716f23ffae83cc20\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 18:52:03.427127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount387267224.mount: Deactivated successfully. 
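The recurring "network is not ready ... cni plugin not initialized" condition, and the later complaint that no network config was found in /etc/cni/net.d, share one cause: the install-cni container created above has not yet written Calico's conflist into /etc/cni/net.d, so containerd's CRI plugin has no CNI configuration to load. The readiness check it keeps failing amounts to scanning that directory; below is a standard-library sketch of the same check (the matched extensions are an assumption about which files count as CNI configs):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/cni/net.d" // directory named in the error messages above
	var confs []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		confs = append(confs, matches...)
	}
	if len(confs) == 0 {
		// Mirrors the "no network config found in /etc/cni/net.d" condition.
		fmt.Println("no CNI network config found; network not ready")
		return
	}
	fmt.Println("CNI configs present:", confs)
}
```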
Mar 17 18:52:03.452444 env[1566]: time="2025-03-17T18:52:03.452369387Z" level=info msg="CreateContainer within sandbox \"446e4191b6f76f3ed1a157c98d804ff4da02901442216400716f23ffae83cc20\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fd9c5de0d7d50b45af15ad799498e5ab5e5b70ef0bf8b20005cf15c5a3abaec2\"" Mar 17 18:52:03.453960 env[1566]: time="2025-03-17T18:52:03.453920789Z" level=info msg="StartContainer for \"fd9c5de0d7d50b45af15ad799498e5ab5e5b70ef0bf8b20005cf15c5a3abaec2\"" Mar 17 18:52:03.520666 env[1566]: time="2025-03-17T18:52:03.520617794Z" level=info msg="StartContainer for \"fd9c5de0d7d50b45af15ad799498e5ab5e5b70ef0bf8b20005cf15c5a3abaec2\" returns successfully" Mar 17 18:52:04.531121 kubelet[2763]: E0317 18:52:04.531068 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7tkt" podUID="b729951e-7fde-40b6-a0f1-675d7c66febf" Mar 17 18:52:04.697642 env[1566]: time="2025-03-17T18:52:04.697562023Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:52:04.718762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd9c5de0d7d50b45af15ad799498e5ab5e5b70ef0bf8b20005cf15c5a3abaec2-rootfs.mount: Deactivated successfully. Mar 17 18:52:04.750482 kubelet[2763]: I0317 18:52:04.749502 2763 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 18:52:04.865703 kubelet[2763]: I0317 18:52:04.865581 2763 topology_manager.go:215] "Topology Admit Handler" podUID="c8147012-69db-48aa-96fa-b27553fe56b8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wlmhz" Mar 17 18:52:04.901160 kubelet[2763]: I0317 18:52:04.901107 2763 topology_manager.go:215] "Topology Admit Handler" podUID="3418e3de-8e64-4cf5-99b4-25c5564ac718" podNamespace="kube-system" podName="coredns-7db6d8ff4d-d5v52" Mar 17 18:52:04.904101 kubelet[2763]: I0317 18:52:04.904051 2763 topology_manager.go:215] "Topology Admit Handler" podUID="7edfd8b5-e741-4ba4-b5a0-38f1929fe58b" podNamespace="calico-apiserver" podName="calico-apiserver-7b85fdb584-rpw9l" Mar 17 18:52:04.909256 kubelet[2763]: I0317 18:52:04.909202 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8147012-69db-48aa-96fa-b27553fe56b8-config-volume\") pod \"coredns-7db6d8ff4d-wlmhz\" (UID: \"c8147012-69db-48aa-96fa-b27553fe56b8\") " pod="kube-system/coredns-7db6d8ff4d-wlmhz" Mar 17 18:52:04.909256 kubelet[2763]: I0317 18:52:04.909244 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w5dh\" (UniqueName: \"kubernetes.io/projected/c8147012-69db-48aa-96fa-b27553fe56b8-kube-api-access-9w5dh\") pod \"coredns-7db6d8ff4d-wlmhz\" (UID: \"c8147012-69db-48aa-96fa-b27553fe56b8\") " pod="kube-system/coredns-7db6d8ff4d-wlmhz" Mar 17 18:52:04.915240 kubelet[2763]: I0317 18:52:04.915188 2763 topology_manager.go:215] "Topology Admit Handler" podUID="65d68753-4b96-44f6-80b9-42dfef957a45" podNamespace="calico-system" podName="calico-kube-controllers-556dcdff49-tlrdg" Mar 17 18:52:04.915639 kubelet[2763]: I0317 18:52:04.915608 2763 
topology_manager.go:215] "Topology Admit Handler" podUID="3da68d2e-dbd2-4641-b6b6-c1e92b514d10" podNamespace="calico-apiserver" podName="calico-apiserver-7b85fdb584-4542j" Mar 17 18:52:05.009629 kubelet[2763]: I0317 18:52:05.009558 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffgq6\" (UniqueName: \"kubernetes.io/projected/65d68753-4b96-44f6-80b9-42dfef957a45-kube-api-access-ffgq6\") pod \"calico-kube-controllers-556dcdff49-tlrdg\" (UID: \"65d68753-4b96-44f6-80b9-42dfef957a45\") " pod="calico-system/calico-kube-controllers-556dcdff49-tlrdg" Mar 17 18:52:05.009959 kubelet[2763]: I0317 18:52:05.009937 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdcgx\" (UniqueName: \"kubernetes.io/projected/3da68d2e-dbd2-4641-b6b6-c1e92b514d10-kube-api-access-zdcgx\") pod \"calico-apiserver-7b85fdb584-4542j\" (UID: \"3da68d2e-dbd2-4641-b6b6-c1e92b514d10\") " pod="calico-apiserver/calico-apiserver-7b85fdb584-4542j" Mar 17 18:52:05.010315 kubelet[2763]: I0317 18:52:05.010296 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7edfd8b5-e741-4ba4-b5a0-38f1929fe58b-calico-apiserver-certs\") pod \"calico-apiserver-7b85fdb584-rpw9l\" (UID: \"7edfd8b5-e741-4ba4-b5a0-38f1929fe58b\") " pod="calico-apiserver/calico-apiserver-7b85fdb584-rpw9l" Mar 17 18:52:05.010438 kubelet[2763]: I0317 18:52:05.010424 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3da68d2e-dbd2-4641-b6b6-c1e92b514d10-calico-apiserver-certs\") pod \"calico-apiserver-7b85fdb584-4542j\" (UID: \"3da68d2e-dbd2-4641-b6b6-c1e92b514d10\") " pod="calico-apiserver/calico-apiserver-7b85fdb584-4542j" Mar 17 18:52:05.020837 kubelet[2763]: I0317 18:52:05.011783 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsh8s\" (UniqueName: \"kubernetes.io/projected/7edfd8b5-e741-4ba4-b5a0-38f1929fe58b-kube-api-access-lsh8s\") pod \"calico-apiserver-7b85fdb584-rpw9l\" (UID: \"7edfd8b5-e741-4ba4-b5a0-38f1929fe58b\") " pod="calico-apiserver/calico-apiserver-7b85fdb584-rpw9l" Mar 17 18:52:05.020837 kubelet[2763]: I0317 18:52:05.011823 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpgsp\" (UniqueName: \"kubernetes.io/projected/3418e3de-8e64-4cf5-99b4-25c5564ac718-kube-api-access-qpgsp\") pod \"coredns-7db6d8ff4d-d5v52\" (UID: \"3418e3de-8e64-4cf5-99b4-25c5564ac718\") " pod="kube-system/coredns-7db6d8ff4d-d5v52" Mar 17 18:52:05.020837 kubelet[2763]: I0317 18:52:05.011843 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65d68753-4b96-44f6-80b9-42dfef957a45-tigera-ca-bundle\") pod \"calico-kube-controllers-556dcdff49-tlrdg\" (UID: \"65d68753-4b96-44f6-80b9-42dfef957a45\") " pod="calico-system/calico-kube-controllers-556dcdff49-tlrdg" Mar 17 18:52:05.020837 kubelet[2763]: I0317 18:52:05.011880 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3418e3de-8e64-4cf5-99b4-25c5564ac718-config-volume\") pod \"coredns-7db6d8ff4d-d5v52\" (UID: 
\"3418e3de-8e64-4cf5-99b4-25c5564ac718\") " pod="kube-system/coredns-7db6d8ff4d-d5v52" Mar 17 18:52:05.171803 env[1566]: time="2025-03-17T18:52:05.171607829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlmhz,Uid:c8147012-69db-48aa-96fa-b27553fe56b8,Namespace:kube-system,Attempt:0,}" Mar 17 18:52:05.228656 env[1566]: time="2025-03-17T18:52:05.228612192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-556dcdff49-tlrdg,Uid:65d68753-4b96-44f6-80b9-42dfef957a45,Namespace:calico-system,Attempt:0,}" Mar 17 18:52:05.504700 env[1566]: time="2025-03-17T18:52:05.504617980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d5v52,Uid:3418e3de-8e64-4cf5-99b4-25c5564ac718,Namespace:kube-system,Attempt:0,}" Mar 17 18:52:05.510542 env[1566]: time="2025-03-17T18:52:05.510488840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b85fdb584-rpw9l,Uid:7edfd8b5-e741-4ba4-b5a0-38f1929fe58b,Namespace:calico-apiserver,Attempt:0,}" Mar 17 18:52:05.519011 env[1566]: time="2025-03-17T18:52:05.518959398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b85fdb584-4542j,Uid:3da68d2e-dbd2-4641-b6b6-c1e92b514d10,Namespace:calico-apiserver,Attempt:0,}" Mar 17 18:52:05.907264 env[1566]: time="2025-03-17T18:52:05.907139555Z" level=info msg="shim disconnected" id=fd9c5de0d7d50b45af15ad799498e5ab5e5b70ef0bf8b20005cf15c5a3abaec2 Mar 17 18:52:05.907909 env[1566]: time="2025-03-17T18:52:05.907856938Z" level=warning msg="cleaning up after shim disconnected" id=fd9c5de0d7d50b45af15ad799498e5ab5e5b70ef0bf8b20005cf15c5a3abaec2 namespace=k8s.io Mar 17 18:52:05.908030 env[1566]: time="2025-03-17T18:52:05.908014574Z" level=info msg="cleaning up dead shim" Mar 17 18:52:05.916024 env[1566]: time="2025-03-17T18:52:05.915962145Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:52:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3496 runtime=io.containerd.runc.v2\n" Mar 17 18:52:06.248051 env[1566]: time="2025-03-17T18:52:06.247980363Z" level=error msg="Failed to destroy network for sandbox \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.248557 env[1566]: time="2025-03-17T18:52:06.248523510Z" level=error msg="encountered an error cleaning up failed sandbox \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.248727 env[1566]: time="2025-03-17T18:52:06.248665947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlmhz,Uid:c8147012-69db-48aa-96fa-b27553fe56b8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.249093 kubelet[2763]: E0317 18:52:06.249047 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.250836 kubelet[2763]: E0317 18:52:06.249117 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wlmhz" Mar 17 18:52:06.250836 kubelet[2763]: E0317 18:52:06.249138 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wlmhz" Mar 17 18:52:06.250836 kubelet[2763]: E0317 18:52:06.249180 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wlmhz_kube-system(c8147012-69db-48aa-96fa-b27553fe56b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wlmhz_kube-system(c8147012-69db-48aa-96fa-b27553fe56b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wlmhz" podUID="c8147012-69db-48aa-96fa-b27553fe56b8" Mar 17 18:52:06.255097 env[1566]: time="2025-03-17T18:52:06.255040477Z" level=error msg="Failed to destroy network for sandbox \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.256221 env[1566]: time="2025-03-17T18:52:06.255478747Z" level=error msg="encountered an error cleaning up failed sandbox \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.256221 env[1566]: time="2025-03-17T18:52:06.255568225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d5v52,Uid:3418e3de-8e64-4cf5-99b4-25c5564ac718,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.256365 kubelet[2763]: E0317 18:52:06.256108 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.256365 kubelet[2763]: E0317 18:52:06.256161 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-d5v52" Mar 17 18:52:06.256365 kubelet[2763]: E0317 18:52:06.256181 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-d5v52" Mar 17 18:52:06.257834 kubelet[2763]: E0317 18:52:06.256223 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-d5v52_kube-system(3418e3de-8e64-4cf5-99b4-25c5564ac718)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-d5v52_kube-system(3418e3de-8e64-4cf5-99b4-25c5564ac718)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-d5v52" podUID="3418e3de-8e64-4cf5-99b4-25c5564ac718" Mar 17 18:52:06.258158 env[1566]: time="2025-03-17T18:52:06.258098846Z" level=error msg="Failed to destroy network for sandbox \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.259195 env[1566]: time="2025-03-17T18:52:06.259145461Z" level=error msg="encountered an error cleaning up failed sandbox \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.259268 env[1566]: time="2025-03-17T18:52:06.259218579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-556dcdff49-tlrdg,Uid:65d68753-4b96-44f6-80b9-42dfef957a45,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.262757 kubelet[2763]: E0317 18:52:06.260052 2763 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.262757 kubelet[2763]: E0317 18:52:06.260127 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-556dcdff49-tlrdg" Mar 17 18:52:06.262757 kubelet[2763]: E0317 18:52:06.260151 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-556dcdff49-tlrdg" Mar 17 18:52:06.262959 kubelet[2763]: E0317 18:52:06.260204 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-556dcdff49-tlrdg_calico-system(65d68753-4b96-44f6-80b9-42dfef957a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-556dcdff49-tlrdg_calico-system(65d68753-4b96-44f6-80b9-42dfef957a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-556dcdff49-tlrdg" podUID="65d68753-4b96-44f6-80b9-42dfef957a45" Mar 17 18:52:06.275025 env[1566]: time="2025-03-17T18:52:06.274950490Z" level=error msg="Failed to destroy network for sandbox \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.275621 env[1566]: time="2025-03-17T18:52:06.275584515Z" level=error msg="encountered an error cleaning up failed sandbox \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.275789 env[1566]: time="2025-03-17T18:52:06.275752031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b85fdb584-rpw9l,Uid:7edfd8b5-e741-4ba4-b5a0-38f1929fe58b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.277918 kubelet[2763]: E0317 18:52:06.276386 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.277918 kubelet[2763]: E0317 18:52:06.276456 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b85fdb584-rpw9l" Mar 17 18:52:06.277918 kubelet[2763]: E0317 18:52:06.276476 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b85fdb584-rpw9l" Mar 17 18:52:06.278083 kubelet[2763]: E0317 18:52:06.276541 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b85fdb584-rpw9l_calico-apiserver(7edfd8b5-e741-4ba4-b5a0-38f1929fe58b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b85fdb584-rpw9l_calico-apiserver(7edfd8b5-e741-4ba4-b5a0-38f1929fe58b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b85fdb584-rpw9l" podUID="7edfd8b5-e741-4ba4-b5a0-38f1929fe58b" Mar 17 18:52:06.293403 env[1566]: time="2025-03-17T18:52:06.293342018Z" level=error msg="Failed to destroy network for sandbox \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.293984 env[1566]: time="2025-03-17T18:52:06.293936404Z" level=error msg="encountered an error cleaning up failed sandbox \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.294128 env[1566]: time="2025-03-17T18:52:06.294096481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b85fdb584-4542j,Uid:3da68d2e-dbd2-4641-b6b6-c1e92b514d10,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.294668 kubelet[2763]: E0317 18:52:06.294505 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.294668 kubelet[2763]: E0317 18:52:06.294592 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b85fdb584-4542j" Mar 17 18:52:06.294668 kubelet[2763]: E0317 18:52:06.294612 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b85fdb584-4542j" Mar 17 18:52:06.295847 kubelet[2763]: E0317 18:52:06.294835 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b85fdb584-4542j_calico-apiserver(3da68d2e-dbd2-4641-b6b6-c1e92b514d10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b85fdb584-4542j_calico-apiserver(3da68d2e-dbd2-4641-b6b6-c1e92b514d10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b85fdb584-4542j" podUID="3da68d2e-dbd2-4641-b6b6-c1e92b514d10" Mar 17 18:52:06.534945 env[1566]: time="2025-03-17T18:52:06.534226964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g7tkt,Uid:b729951e-7fde-40b6-a0f1-675d7c66febf,Namespace:calico-system,Attempt:0,}" Mar 17 18:52:06.608916 kubelet[2763]: I0317 18:52:06.608065 2763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Mar 17 18:52:06.610041 env[1566]: time="2025-03-17T18:52:06.609014529Z" level=info msg="StopPodSandbox for \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\"" Mar 17 18:52:06.611733 kubelet[2763]: I0317 18:52:06.611701 2763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Mar 17 18:52:06.614258 env[1566]: time="2025-03-17T18:52:06.612705083Z" level=info msg="StopPodSandbox for 
\"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\"" Mar 17 18:52:06.615833 kubelet[2763]: I0317 18:52:06.615800 2763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Mar 17 18:52:06.616962 env[1566]: time="2025-03-17T18:52:06.616930143Z" level=info msg="StopPodSandbox for \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\"" Mar 17 18:52:06.619071 kubelet[2763]: I0317 18:52:06.618508 2763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Mar 17 18:52:06.619207 env[1566]: time="2025-03-17T18:52:06.619165851Z" level=info msg="StopPodSandbox for \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\"" Mar 17 18:52:06.623461 kubelet[2763]: I0317 18:52:06.622611 2763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Mar 17 18:52:06.633853 env[1566]: time="2025-03-17T18:52:06.624502486Z" level=info msg="StopPodSandbox for \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\"" Mar 17 18:52:06.638950 env[1566]: time="2025-03-17T18:52:06.638861029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Mar 17 18:52:06.676114 env[1566]: time="2025-03-17T18:52:06.676064715Z" level=error msg="Failed to destroy network for sandbox \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.678361 env[1566]: time="2025-03-17T18:52:06.678311743Z" level=error msg="encountered an error cleaning up failed sandbox \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.678898 env[1566]: time="2025-03-17T18:52:06.678814011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g7tkt,Uid:b729951e-7fde-40b6-a0f1-675d7c66febf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.679102 kubelet[2763]: E0317 18:52:06.679060 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.679181 kubelet[2763]: E0317 18:52:06.679121 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g7tkt" Mar 17 18:52:06.679181 kubelet[2763]: E0317 18:52:06.679150 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g7tkt" Mar 17 18:52:06.679248 kubelet[2763]: E0317 18:52:06.679187 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g7tkt_calico-system(b729951e-7fde-40b6-a0f1-675d7c66febf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g7tkt_calico-system(b729951e-7fde-40b6-a0f1-675d7c66febf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g7tkt" podUID="b729951e-7fde-40b6-a0f1-675d7c66febf" Mar 17 18:52:06.713782 env[1566]: time="2025-03-17T18:52:06.713717352Z" level=error msg="StopPodSandbox for \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\" failed" error="failed to destroy network for sandbox \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.720338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c-shm.mount: Deactivated successfully. Mar 17 18:52:06.720519 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a-shm.mount: Deactivated successfully. 
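Every failure above, for both sandbox creation (add) and teardown (delete), hits the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes after it starts and bind-mounts /var/lib/calico/, and bails out while it is missing. A minimal Go sketch of that check, assuming only the path quoted in the errors (the helper and its wording are illustrative, not Calico's actual code):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

// nodenameFile is the path reported in the errors above; calico/node
// creates it once it is running with /var/lib/calico/ mounted.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename mirrors the precondition the plugin enforces: the file must
// exist and contain the node name written by the calico/node container.
func readNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if errors.Is(err, fs.ErrNotExist) {
		// The state the kubelet keeps reporting until calico/node is up.
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico nodename:", name)
}
```

Until calico/node writes that file, every CNI add or delete for the pending pods returns this same error, which is why the kubelet keeps logging "Error syncing pod, skipping" and retrying; the entries further down show the calls succeeding once the calico-node container is running.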
Mar 17 18:52:06.727407 kubelet[2763]: E0317 18:52:06.726681 2763 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Mar 17 18:52:06.727407 kubelet[2763]: E0317 18:52:06.726773 2763 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c"} Mar 17 18:52:06.727407 kubelet[2763]: E0317 18:52:06.726836 2763 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c8147012-69db-48aa-96fa-b27553fe56b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:52:06.727407 kubelet[2763]: E0317 18:52:06.726857 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c8147012-69db-48aa-96fa-b27553fe56b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wlmhz" podUID="c8147012-69db-48aa-96fa-b27553fe56b8" Mar 17 18:52:06.727700 env[1566]: time="2025-03-17T18:52:06.727538067Z" level=error msg="StopPodSandbox for \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\" failed" error="failed to destroy network for sandbox \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.728047 kubelet[2763]: E0317 18:52:06.727860 2763 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Mar 17 18:52:06.728047 kubelet[2763]: E0317 18:52:06.727946 2763 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a"} Mar 17 18:52:06.728047 kubelet[2763]: E0317 18:52:06.727982 2763 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7edfd8b5-e741-4ba4-b5a0-38f1929fe58b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:52:06.728047 kubelet[2763]: E0317 18:52:06.728016 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7edfd8b5-e741-4ba4-b5a0-38f1929fe58b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b85fdb584-rpw9l" podUID="7edfd8b5-e741-4ba4-b5a0-38f1929fe58b" Mar 17 18:52:06.735198 env[1566]: time="2025-03-17T18:52:06.735127209Z" level=error msg="StopPodSandbox for \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\" failed" error="failed to destroy network for sandbox \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.735714 kubelet[2763]: E0317 18:52:06.735664 2763 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Mar 17 18:52:06.735796 kubelet[2763]: E0317 18:52:06.735726 2763 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334"} Mar 17 18:52:06.735796 kubelet[2763]: E0317 18:52:06.735771 2763 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3418e3de-8e64-4cf5-99b4-25c5564ac718\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:52:06.735916 kubelet[2763]: E0317 18:52:06.735793 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3418e3de-8e64-4cf5-99b4-25c5564ac718\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-d5v52" podUID="3418e3de-8e64-4cf5-99b4-25c5564ac718" Mar 17 18:52:06.749823 env[1566]: time="2025-03-17T18:52:06.749734866Z" level=error msg="StopPodSandbox for \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\" failed" 
error="failed to destroy network for sandbox \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.750388 kubelet[2763]: E0317 18:52:06.750205 2763 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Mar 17 18:52:06.750388 kubelet[2763]: E0317 18:52:06.750273 2763 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a"} Mar 17 18:52:06.750388 kubelet[2763]: E0317 18:52:06.750325 2763 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"65d68753-4b96-44f6-80b9-42dfef957a45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:52:06.750388 kubelet[2763]: E0317 18:52:06.750351 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"65d68753-4b96-44f6-80b9-42dfef957a45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-556dcdff49-tlrdg" podUID="65d68753-4b96-44f6-80b9-42dfef957a45" Mar 17 18:52:06.752039 env[1566]: time="2025-03-17T18:52:06.751982573Z" level=error msg="StopPodSandbox for \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\" failed" error="failed to destroy network for sandbox \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:06.752702 kubelet[2763]: E0317 18:52:06.752506 2763 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Mar 17 18:52:06.752702 kubelet[2763]: E0317 18:52:06.752577 2763 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68"} Mar 17 18:52:06.752702 
kubelet[2763]: E0317 18:52:06.752622 2763 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3da68d2e-dbd2-4641-b6b6-c1e92b514d10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:52:06.752702 kubelet[2763]: E0317 18:52:06.752649 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3da68d2e-dbd2-4641-b6b6-c1e92b514d10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b85fdb584-4542j" podUID="3da68d2e-dbd2-4641-b6b6-c1e92b514d10" Mar 17 18:52:07.626013 kubelet[2763]: I0317 18:52:07.625968 2763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Mar 17 18:52:07.627911 env[1566]: time="2025-03-17T18:52:07.627241556Z" level=info msg="StopPodSandbox for \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\"" Mar 17 18:52:07.657639 env[1566]: time="2025-03-17T18:52:07.657577254Z" level=error msg="StopPodSandbox for \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\" failed" error="failed to destroy network for sandbox \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:52:07.658087 kubelet[2763]: E0317 18:52:07.658045 2763 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Mar 17 18:52:07.658176 kubelet[2763]: E0317 18:52:07.658107 2763 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa"} Mar 17 18:52:07.658176 kubelet[2763]: E0317 18:52:07.658142 2763 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b729951e-7fde-40b6-a0f1-675d7c66febf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:52:07.658260 kubelet[2763]: E0317 18:52:07.658183 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"b729951e-7fde-40b6-a0f1-675d7c66febf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g7tkt" podUID="b729951e-7fde-40b6-a0f1-675d7c66febf" Mar 17 18:52:12.043272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848882195.mount: Deactivated successfully. Mar 17 18:52:12.433554 env[1566]: time="2025-03-17T18:52:12.433431826Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:12.447179 env[1566]: time="2025-03-17T18:52:12.447126730Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:12.453100 env[1566]: time="2025-03-17T18:52:12.453053602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:12.460267 env[1566]: time="2025-03-17T18:52:12.460205967Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:12.461064 env[1566]: time="2025-03-17T18:52:12.461030109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Mar 17 18:52:12.478746 env[1566]: time="2025-03-17T18:52:12.478699527Z" level=info msg="CreateContainer within sandbox \"446e4191b6f76f3ed1a157c98d804ff4da02901442216400716f23ffae83cc20\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 18:52:12.544559 env[1566]: time="2025-03-17T18:52:12.544499302Z" level=info msg="CreateContainer within sandbox \"446e4191b6f76f3ed1a157c98d804ff4da02901442216400716f23ffae83cc20\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c095753d191fcbffbc8b75aece1c7488e9f3213681409fbc4369c81a4d9ec146\"" Mar 17 18:52:12.547024 env[1566]: time="2025-03-17T18:52:12.546967929Z" level=info msg="StartContainer for \"c095753d191fcbffbc8b75aece1c7488e9f3213681409fbc4369c81a4d9ec146\"" Mar 17 18:52:12.607201 env[1566]: time="2025-03-17T18:52:12.607147426Z" level=info msg="StartContainer for \"c095753d191fcbffbc8b75aece1c7488e9f3213681409fbc4369c81a4d9ec146\" returns successfully" Mar 17 18:52:12.869462 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 18:52:12.869630 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 17 18:52:13.663457 systemd[1]: run-containerd-runc-k8s.io-c095753d191fcbffbc8b75aece1c7488e9f3213681409fbc4369c81a4d9ec146-runc.y4PdKQ.mount: Deactivated successfully. 
Mar 17 18:52:14.372000 audit[3968]: AVC avc: denied { write } for pid=3968 comm="tee" name="fd" dev="proc" ino=25718 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:52:14.380206 kernel: kauditd_printk_skb: 14 callbacks suppressed Mar 17 18:52:14.380325 kernel: audit: type=1400 audit(1742237534.372:305): avc: denied { write } for pid=3968 comm="tee" name="fd" dev="proc" ino=25718 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:52:14.372000 audit[3968]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcd4d0a11 a2=241 a3=1b6 items=1 ppid=3934 pid=3968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:14.444670 kernel: audit: type=1300 audit(1742237534.372:305): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcd4d0a11 a2=241 a3=1b6 items=1 ppid=3934 pid=3968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:14.372000 audit: CWD cwd="/etc/service/enabled/bird/log" Mar 17 18:52:14.455059 kernel: audit: type=1307 audit(1742237534.372:305): cwd="/etc/service/enabled/bird/log" Mar 17 18:52:14.372000 audit: PATH item=0 name="/dev/fd/63" inode=25707 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:14.481582 kernel: audit: type=1302 audit(1742237534.372:305): item=0 name="/dev/fd/63" inode=25707 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:14.481743 kernel: audit: type=1327 audit(1742237534.372:305): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:52:14.372000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:52:14.405000 audit[3985]: AVC avc: denied { write } for pid=3985 comm="tee" name="fd" dev="proc" ino=25746 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:52:14.531658 kernel: audit: type=1400 audit(1742237534.405:306): avc: denied { write } for pid=3985 comm="tee" name="fd" dev="proc" ino=25746 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:52:14.531809 kernel: audit: type=1300 audit(1742237534.405:306): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcd058a10 a2=241 a3=1b6 items=1 ppid=3939 pid=3985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:14.405000 audit[3985]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcd058a10 a2=241 a3=1b6 items=1 ppid=3939 pid=3985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 
18:52:14.405000 audit: CWD cwd="/etc/service/enabled/confd/log" Mar 17 18:52:14.405000 audit: PATH item=0 name="/dev/fd/63" inode=25734 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:14.606788 kernel: audit: type=1307 audit(1742237534.405:306): cwd="/etc/service/enabled/confd/log" Mar 17 18:52:14.606929 kernel: audit: type=1302 audit(1742237534.405:306): item=0 name="/dev/fd/63" inode=25734 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:14.405000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:52:14.629809 kernel: audit: type=1327 audit(1742237534.405:306): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:52:14.407000 audit[3991]: AVC avc: denied { write } for pid=3991 comm="tee" name="fd" dev="proc" ino=25751 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:52:14.407000 audit[3991]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc4d25a12 a2=241 a3=1b6 items=1 ppid=3942 pid=3991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:14.407000 audit: CWD cwd="/etc/service/enabled/cni/log" Mar 17 18:52:14.407000 audit: PATH item=0 name="/dev/fd/63" inode=25739 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:14.407000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:52:14.606000 audit[4001]: AVC avc: denied { write } for pid=4001 comm="tee" name="fd" dev="proc" ino=25794 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:52:14.606000 audit[4001]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff4172a10 a2=241 a3=1b6 items=1 ppid=3946 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:14.606000 audit: CWD cwd="/etc/service/enabled/felix/log" Mar 17 18:52:14.606000 audit: PATH item=0 name="/dev/fd/63" inode=25755 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:14.606000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:52:14.612000 audit[3997]: AVC avc: denied { write } for pid=3997 comm="tee" name="fd" dev="proc" ino=25798 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:52:14.612000 audit[3997]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe5810a10 a2=241 a3=1b6 items=1 ppid=3943 
pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:14.612000 audit: CWD cwd="/etc/service/enabled/bird6/log" Mar 17 18:52:14.612000 audit: PATH item=0 name="/dev/fd/63" inode=25748 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:14.612000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:52:14.612000 audit[4008]: AVC avc: denied { write } for pid=4008 comm="tee" name="fd" dev="proc" ino=25805 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:52:14.612000 audit[4008]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdcb1ca01 a2=241 a3=1b6 items=1 ppid=3938 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:14.612000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Mar 17 18:52:14.612000 audit: PATH item=0 name="/dev/fd/63" inode=25791 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:14.612000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:52:14.617000 audit[4010]: AVC avc: denied { write } for pid=4010 comm="tee" name="fd" dev="proc" ino=25809 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:52:14.617000 audit[4010]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe230ea00 a2=241 a3=1b6 items=1 ppid=3936 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:14.617000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Mar 17 18:52:14.617000 audit: PATH item=0 name="/dev/fd/63" inode=25802 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:52:14.617000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:52:17.534271 env[1566]: time="2025-03-17T18:52:17.532888883Z" level=info msg="StopPodSandbox for \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\"" Mar 17 18:52:17.535192 env[1566]: time="2025-03-17T18:52:17.535157757Z" level=info msg="StopPodSandbox for \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\"" Mar 17 18:52:17.606578 kubelet[2763]: I0317 18:52:17.606505 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5nqvl" podStartSLOduration=6.117578441 podStartE2EDuration="23.606486144s" podCreationTimestamp="2025-03-17 18:51:54 +0000 UTC" firstStartedPulling="2025-03-17 18:51:54.973599534 +0000 UTC m=+23.921803650" 
lastFinishedPulling="2025-03-17 18:52:12.462507277 +0000 UTC m=+41.410711353" observedRunningTime="2025-03-17 18:52:12.662357191 +0000 UTC m=+41.610561307" watchObservedRunningTime="2025-03-17 18:52:17.606486144 +0000 UTC m=+46.554690260" Mar 17 18:52:17.653165 env[1566]: 2025-03-17 18:52:17.616 [INFO][4086] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Mar 17 18:52:17.653165 env[1566]: 2025-03-17 18:52:17.617 [INFO][4086] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" iface="eth0" netns="/var/run/netns/cni-85b53b73-661f-fd48-d537-ae7bb1df0ebd" Mar 17 18:52:17.653165 env[1566]: 2025-03-17 18:52:17.617 [INFO][4086] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" iface="eth0" netns="/var/run/netns/cni-85b53b73-661f-fd48-d537-ae7bb1df0ebd" Mar 17 18:52:17.653165 env[1566]: 2025-03-17 18:52:17.617 [INFO][4086] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" iface="eth0" netns="/var/run/netns/cni-85b53b73-661f-fd48-d537-ae7bb1df0ebd" Mar 17 18:52:17.653165 env[1566]: 2025-03-17 18:52:17.617 [INFO][4086] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Mar 17 18:52:17.653165 env[1566]: 2025-03-17 18:52:17.617 [INFO][4086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Mar 17 18:52:17.653165 env[1566]: 2025-03-17 18:52:17.638 [INFO][4106] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" HandleID="k8s-pod-network.e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:17.653165 env[1566]: 2025-03-17 18:52:17.638 [INFO][4106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:17.653165 env[1566]: 2025-03-17 18:52:17.638 [INFO][4106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:17.653165 env[1566]: 2025-03-17 18:52:17.647 [WARNING][4106] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" HandleID="k8s-pod-network.e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:17.653165 env[1566]: 2025-03-17 18:52:17.647 [INFO][4106] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" HandleID="k8s-pod-network.e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:17.653165 env[1566]: 2025-03-17 18:52:17.649 [INFO][4106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:17.653165 env[1566]: 2025-03-17 18:52:17.651 [INFO][4086] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Mar 17 18:52:17.656167 systemd[1]: run-netns-cni\x2d85b53b73\x2d661f\x2dfd48\x2dd537\x2dae7bb1df0ebd.mount: Deactivated successfully. Mar 17 18:52:17.657829 env[1566]: time="2025-03-17T18:52:17.657417267Z" level=info msg="TearDown network for sandbox \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\" successfully" Mar 17 18:52:17.657829 env[1566]: time="2025-03-17T18:52:17.657464426Z" level=info msg="StopPodSandbox for \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\" returns successfully" Mar 17 18:52:17.658415 env[1566]: time="2025-03-17T18:52:17.658369847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlmhz,Uid:c8147012-69db-48aa-96fa-b27553fe56b8,Namespace:kube-system,Attempt:1,}" Mar 17 18:52:17.727848 env[1566]: 2025-03-17 18:52:17.607 [INFO][4096] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Mar 17 18:52:17.727848 env[1566]: 2025-03-17 18:52:17.616 [INFO][4096] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" iface="eth0" netns="/var/run/netns/cni-e7310a14-a24c-6cdf-6961-a4cd063b125b" Mar 17 18:52:17.727848 env[1566]: 2025-03-17 18:52:17.617 [INFO][4096] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" iface="eth0" netns="/var/run/netns/cni-e7310a14-a24c-6cdf-6961-a4cd063b125b" Mar 17 18:52:17.727848 env[1566]: 2025-03-17 18:52:17.617 [INFO][4096] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" iface="eth0" netns="/var/run/netns/cni-e7310a14-a24c-6cdf-6961-a4cd063b125b" Mar 17 18:52:17.727848 env[1566]: 2025-03-17 18:52:17.617 [INFO][4096] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Mar 17 18:52:17.727848 env[1566]: 2025-03-17 18:52:17.617 [INFO][4096] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Mar 17 18:52:17.727848 env[1566]: 2025-03-17 18:52:17.660 [INFO][4104] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" HandleID="k8s-pod-network.33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:17.727848 env[1566]: 2025-03-17 18:52:17.661 [INFO][4104] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:17.727848 env[1566]: 2025-03-17 18:52:17.661 [INFO][4104] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:17.727848 env[1566]: 2025-03-17 18:52:17.701 [WARNING][4104] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" HandleID="k8s-pod-network.33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:17.727848 env[1566]: 2025-03-17 18:52:17.701 [INFO][4104] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" HandleID="k8s-pod-network.33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:17.727848 env[1566]: 2025-03-17 18:52:17.703 [INFO][4104] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:17.727848 env[1566]: 2025-03-17 18:52:17.717 [INFO][4096] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Mar 17 18:52:17.732840 systemd[1]: run-netns-cni\x2de7310a14\x2da24c\x2d6cdf\x2d6961\x2da4cd063b125b.mount: Deactivated successfully. Mar 17 18:52:17.734138 env[1566]: time="2025-03-17T18:52:17.733915589Z" level=info msg="TearDown network for sandbox \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\" successfully" Mar 17 18:52:17.734138 env[1566]: time="2025-03-17T18:52:17.733958108Z" level=info msg="StopPodSandbox for \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\" returns successfully" Mar 17 18:52:17.734654 env[1566]: time="2025-03-17T18:52:17.734620854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b85fdb584-rpw9l,Uid:7edfd8b5-e741-4ba4-b5a0-38f1929fe58b,Namespace:calico-apiserver,Attempt:1,}" Mar 17 18:52:17.966724 systemd-networkd[1734]: calid25cc6a53dd: Link UP Mar 17 18:52:17.981227 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:52:17.981355 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid25cc6a53dd: link becomes ready Mar 17 18:52:17.983162 systemd-networkd[1734]: calid25cc6a53dd: Gained carrier Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.769 [INFO][4118] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.793 [INFO][4118] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0 coredns-7db6d8ff4d- kube-system c8147012-69db-48aa-96fa-b27553fe56b8 752 0 2025-03-17 18:51:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.7-a-c36c8d7be6 coredns-7db6d8ff4d-wlmhz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid25cc6a53dd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wlmhz" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-" Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.794 [INFO][4118] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wlmhz" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:18.004025 env[1566]: 
2025-03-17 18:52:17.878 [INFO][4142] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" HandleID="k8s-pod-network.2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.896 [INFO][4142] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" HandleID="k8s-pod-network.2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cd50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.7-a-c36c8d7be6", "pod":"coredns-7db6d8ff4d-wlmhz", "timestamp":"2025-03-17 18:52:17.878276168 +0000 UTC"}, Hostname:"ci-3510.3.7-a-c36c8d7be6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.896 [INFO][4142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.896 [INFO][4142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.896 [INFO][4142] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-a-c36c8d7be6' Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.898 [INFO][4142] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.904 [INFO][4142] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.909 [INFO][4142] ipam/ipam.go 489: Trying affinity for 192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.911 [INFO][4142] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.914 [INFO][4142] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.914 [INFO][4142] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.916 [INFO][4142] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952 Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.926 [INFO][4142] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.938 [INFO][4142] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.129/26] block=192.168.10.128/26 handle="k8s-pod-network.2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" 
host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.938 [INFO][4142] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.129/26] handle="k8s-pod-network.2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.938 [INFO][4142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:18.004025 env[1566]: 2025-03-17 18:52:17.938 [INFO][4142] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.129/26] IPv6=[] ContainerID="2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" HandleID="k8s-pod-network.2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:18.004601 env[1566]: 2025-03-17 18:52:17.940 [INFO][4118] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wlmhz" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c8147012-69db-48aa-96fa-b27553fe56b8", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"", Pod:"coredns-7db6d8ff4d-wlmhz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid25cc6a53dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:18.004601 env[1566]: 2025-03-17 18:52:17.940 [INFO][4118] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.129/32] ContainerID="2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wlmhz" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:18.004601 env[1566]: 2025-03-17 18:52:17.940 [INFO][4118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid25cc6a53dd ContainerID="2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-wlmhz" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:18.004601 env[1566]: 2025-03-17 18:52:17.985 [INFO][4118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wlmhz" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:18.004601 env[1566]: 2025-03-17 18:52:17.985 [INFO][4118] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wlmhz" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c8147012-69db-48aa-96fa-b27553fe56b8", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952", Pod:"coredns-7db6d8ff4d-wlmhz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid25cc6a53dd", MAC:"36:4c:50:22:84:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:18.004601 env[1566]: 2025-03-17 18:52:17.998 [INFO][4118] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wlmhz" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:18.027604 env[1566]: time="2025-03-17T18:52:18.027522575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:52:18.027604 env[1566]: time="2025-03-17T18:52:18.027568934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:52:18.027914 env[1566]: time="2025-03-17T18:52:18.027580533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:52:18.028044 env[1566]: time="2025-03-17T18:52:18.027860728Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952 pid=4190 runtime=io.containerd.runc.v2 Mar 17 18:52:18.078889 env[1566]: time="2025-03-17T18:52:18.078820542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wlmhz,Uid:c8147012-69db-48aa-96fa-b27553fe56b8,Namespace:kube-system,Attempt:1,} returns sandbox id \"2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952\"" Mar 17 18:52:18.088466 env[1566]: time="2025-03-17T18:52:18.088420668Z" level=info msg="CreateContainer within sandbox \"2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:52:18.132181 systemd-networkd[1734]: calic078a3256b2: Link UP Mar 17 18:52:18.134117 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic078a3256b2: link becomes ready Mar 17 18:52:18.133921 systemd-networkd[1734]: calic078a3256b2: Gained carrier Mar 17 18:52:18.161836 env[1566]: time="2025-03-17T18:52:18.161748552Z" level=info msg="CreateContainer within sandbox \"2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"168fcdca4fc606882d682784859c4143cdcf9a7a50452ecb031dd6d7ee617b23\"" Mar 17 18:52:18.162646 env[1566]: time="2025-03-17T18:52:18.162614294Z" level=info msg="StartContainer for \"168fcdca4fc606882d682784859c4143cdcf9a7a50452ecb031dd6d7ee617b23\"" Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:17.869 [INFO][4140] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:17.889 [INFO][4140] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0 calico-apiserver-7b85fdb584- calico-apiserver 7edfd8b5-e741-4ba4-b5a0-38f1929fe58b 751 0 2025-03-17 18:51:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b85fdb584 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.7-a-c36c8d7be6 calico-apiserver-7b85fdb584-rpw9l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic078a3256b2 [] []}} ContainerID="58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-rpw9l" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-" Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:17.890 [INFO][4140] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-rpw9l" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:17.932 [INFO][4164] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" HandleID="k8s-pod-network.58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:17.950 [INFO][4164] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" HandleID="k8s-pod-network.58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a07f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.7-a-c36c8d7be6", "pod":"calico-apiserver-7b85fdb584-rpw9l", "timestamp":"2025-03-17 18:52:17.932165431 +0000 UTC"}, Hostname:"ci-3510.3.7-a-c36c8d7be6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:17.950 [INFO][4164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:17.950 [INFO][4164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:17.950 [INFO][4164] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-a-c36c8d7be6' Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:17.952 [INFO][4164] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:17.957 [INFO][4164] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:17.969 [INFO][4164] ipam/ipam.go 489: Trying affinity for 192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:17.982 [INFO][4164] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:17.985 [INFO][4164] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:17.985 [INFO][4164] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:17.996 [INFO][4164] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934 Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:18.050 [INFO][4164] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:18.118 [INFO][4164] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.130/26] block=192.168.10.128/26 handle="k8s-pod-network.58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.222680 env[1566]: 
2025-03-17 18:52:18.118 [INFO][4164] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.130/26] handle="k8s-pod-network.58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:18.118 [INFO][4164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:18.222680 env[1566]: 2025-03-17 18:52:18.118 [INFO][4164] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.130/26] IPv6=[] ContainerID="58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" HandleID="k8s-pod-network.58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:18.224165 env[1566]: 2025-03-17 18:52:18.120 [INFO][4140] cni-plugin/k8s.go 386: Populated endpoint ContainerID="58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-rpw9l" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0", GenerateName:"calico-apiserver-7b85fdb584-", Namespace:"calico-apiserver", SelfLink:"", UID:"7edfd8b5-e741-4ba4-b5a0-38f1929fe58b", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b85fdb584", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"", Pod:"calico-apiserver-7b85fdb584-rpw9l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic078a3256b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:18.224165 env[1566]: 2025-03-17 18:52:18.120 [INFO][4140] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.130/32] ContainerID="58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-rpw9l" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:18.224165 env[1566]: 2025-03-17 18:52:18.120 [INFO][4140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic078a3256b2 ContainerID="58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-rpw9l" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:18.224165 env[1566]: 2025-03-17 18:52:18.133 [INFO][4140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-rpw9l" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:18.224165 env[1566]: 2025-03-17 18:52:18.134 [INFO][4140] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-rpw9l" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0", GenerateName:"calico-apiserver-7b85fdb584-", Namespace:"calico-apiserver", SelfLink:"", UID:"7edfd8b5-e741-4ba4-b5a0-38f1929fe58b", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b85fdb584", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934", Pod:"calico-apiserver-7b85fdb584-rpw9l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic078a3256b2", MAC:"a2:03:12:c6:a3:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:18.224165 env[1566]: 2025-03-17 18:52:18.214 [INFO][4140] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-rpw9l" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:18.224165 env[1566]: time="2025-03-17T18:52:18.223854261Z" level=info msg="StartContainer for \"168fcdca4fc606882d682784859c4143cdcf9a7a50452ecb031dd6d7ee617b23\" returns successfully" Mar 17 18:52:18.256082 env[1566]: time="2025-03-17T18:52:18.255995134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:52:18.256082 env[1566]: time="2025-03-17T18:52:18.256037813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:52:18.256082 env[1566]: time="2025-03-17T18:52:18.256049413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:52:18.256922 env[1566]: time="2025-03-17T18:52:18.256439725Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934 pid=4281 runtime=io.containerd.runc.v2 Mar 17 18:52:18.302205 env[1566]: time="2025-03-17T18:52:18.302155045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b85fdb584-rpw9l,Uid:7edfd8b5-e741-4ba4-b5a0-38f1929fe58b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934\"" Mar 17 18:52:18.305520 env[1566]: time="2025-03-17T18:52:18.304311521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Mar 17 18:52:18.532840 env[1566]: time="2025-03-17T18:52:18.532719322Z" level=info msg="StopPodSandbox for \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\"" Mar 17 18:52:18.534115 env[1566]: time="2025-03-17T18:52:18.534073015Z" level=info msg="StopPodSandbox for \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\"" Mar 17 18:52:18.680644 env[1566]: 2025-03-17 18:52:18.607 [INFO][4352] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Mar 17 18:52:18.680644 env[1566]: 2025-03-17 18:52:18.607 [INFO][4352] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" iface="eth0" netns="/var/run/netns/cni-86ec2747-c494-f370-be5c-50b8d35c1cbd" Mar 17 18:52:18.680644 env[1566]: 2025-03-17 18:52:18.607 [INFO][4352] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" iface="eth0" netns="/var/run/netns/cni-86ec2747-c494-f370-be5c-50b8d35c1cbd" Mar 17 18:52:18.680644 env[1566]: 2025-03-17 18:52:18.607 [INFO][4352] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" iface="eth0" netns="/var/run/netns/cni-86ec2747-c494-f370-be5c-50b8d35c1cbd" Mar 17 18:52:18.680644 env[1566]: 2025-03-17 18:52:18.607 [INFO][4352] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Mar 17 18:52:18.680644 env[1566]: 2025-03-17 18:52:18.608 [INFO][4352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Mar 17 18:52:18.680644 env[1566]: 2025-03-17 18:52:18.640 [INFO][4360] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" HandleID="k8s-pod-network.e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:18.680644 env[1566]: 2025-03-17 18:52:18.640 [INFO][4360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:18.680644 env[1566]: 2025-03-17 18:52:18.640 [INFO][4360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:18.680644 env[1566]: 2025-03-17 18:52:18.652 [WARNING][4360] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" HandleID="k8s-pod-network.e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:18.680644 env[1566]: 2025-03-17 18:52:18.653 [INFO][4360] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" HandleID="k8s-pod-network.e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:18.680644 env[1566]: 2025-03-17 18:52:18.672 [INFO][4360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:18.680644 env[1566]: 2025-03-17 18:52:18.676 [INFO][4352] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Mar 17 18:52:18.683553 env[1566]: time="2025-03-17T18:52:18.680840860Z" level=info msg="TearDown network for sandbox \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\" successfully" Mar 17 18:52:18.683553 env[1566]: time="2025-03-17T18:52:18.680932818Z" level=info msg="StopPodSandbox for \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\" returns successfully" Mar 17 18:52:18.683553 env[1566]: time="2025-03-17T18:52:18.681706322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d5v52,Uid:3418e3de-8e64-4cf5-99b4-25c5564ac718,Namespace:kube-system,Attempt:1,}" Mar 17 18:52:18.684592 systemd[1]: run-netns-cni\x2d86ec2747\x2dc494\x2df370\x2dbe5c\x2d50b8d35c1cbd.mount: Deactivated successfully. Mar 17 18:52:18.719461 kubelet[2763]: I0317 18:52:18.717830 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wlmhz" podStartSLOduration=33.717800555 podStartE2EDuration="33.717800555s" podCreationTimestamp="2025-03-17 18:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:52:18.717005291 +0000 UTC m=+47.665209407" watchObservedRunningTime="2025-03-17 18:52:18.717800555 +0000 UTC m=+47.666004671" Mar 17 18:52:18.725000 audit[4375]: NETFILTER_CFG table=filter:100 family=2 entries=18 op=nft_register_rule pid=4375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:18.725000 audit[4375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6652 a0=3 a1=ffffefb8a0a0 a2=0 a3=1 items=0 ppid=2898 pid=4375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:18.725000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:18.729000 audit[4375]: NETFILTER_CFG table=nat:101 family=2 entries=12 op=nft_register_rule pid=4375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:18.729000 audit[4375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffefb8a0a0 a2=0 a3=1 items=0 ppid=2898 pid=4375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:18.729000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:18.736318 env[1566]: 2025-03-17 18:52:18.608 [INFO][4344] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Mar 17 18:52:18.736318 env[1566]: 2025-03-17 18:52:18.614 [INFO][4344] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" iface="eth0" netns="/var/run/netns/cni-4546b85a-eed0-55f9-08b9-c482e956b61a" Mar 17 18:52:18.736318 env[1566]: 2025-03-17 18:52:18.614 [INFO][4344] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" iface="eth0" netns="/var/run/netns/cni-4546b85a-eed0-55f9-08b9-c482e956b61a" Mar 17 18:52:18.736318 env[1566]: 2025-03-17 18:52:18.614 [INFO][4344] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" iface="eth0" netns="/var/run/netns/cni-4546b85a-eed0-55f9-08b9-c482e956b61a" Mar 17 18:52:18.736318 env[1566]: 2025-03-17 18:52:18.614 [INFO][4344] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Mar 17 18:52:18.736318 env[1566]: 2025-03-17 18:52:18.614 [INFO][4344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Mar 17 18:52:18.736318 env[1566]: 2025-03-17 18:52:18.673 [INFO][4365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" HandleID="k8s-pod-network.b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:18.736318 env[1566]: 2025-03-17 18:52:18.674 [INFO][4365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:18.736318 env[1566]: 2025-03-17 18:52:18.674 [INFO][4365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:18.736318 env[1566]: 2025-03-17 18:52:18.721 [WARNING][4365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" HandleID="k8s-pod-network.b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:18.736318 env[1566]: 2025-03-17 18:52:18.722 [INFO][4365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" HandleID="k8s-pod-network.b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:18.736318 env[1566]: 2025-03-17 18:52:18.725 [INFO][4365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:18.736318 env[1566]: 2025-03-17 18:52:18.728 [INFO][4344] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Mar 17 18:52:18.736713 systemd[1]: run-netns-cni\x2d4546b85a\x2deed0\x2d55f9\x2d08b9\x2dc482e956b61a.mount: Deactivated successfully. 
Mar 17 18:52:18.737906 env[1566]: time="2025-03-17T18:52:18.737434320Z" level=info msg="TearDown network for sandbox \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\" successfully" Mar 17 18:52:18.738058 env[1566]: time="2025-03-17T18:52:18.738031188Z" level=info msg="StopPodSandbox for \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\" returns successfully" Mar 17 18:52:18.739685 env[1566]: time="2025-03-17T18:52:18.739642236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-556dcdff49-tlrdg,Uid:65d68753-4b96-44f6-80b9-42dfef957a45,Namespace:calico-system,Attempt:1,}" Mar 17 18:52:18.830000 audit[4404]: NETFILTER_CFG table=filter:102 family=2 entries=15 op=nft_register_rule pid=4404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:18.830000 audit[4404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4420 a0=3 a1=ffffe8d26360 a2=0 a3=1 items=0 ppid=2898 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:18.830000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:18.843000 audit[4404]: NETFILTER_CFG table=nat:103 family=2 entries=33 op=nft_register_chain pid=4404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:18.843000 audit[4404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13428 a0=3 a1=ffffe8d26360 a2=0 a3=1 items=0 ppid=2898 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:18.843000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:19.019492 systemd-networkd[1734]: cali36c11c6a3a3: Link UP Mar 17 18:52:19.048834 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:52:19.048976 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali36c11c6a3a3: link becomes ready Mar 17 18:52:19.049487 systemd-networkd[1734]: cali36c11c6a3a3: Gained carrier Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.789 [INFO][4376] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.822 [INFO][4376] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0 coredns-7db6d8ff4d- kube-system 3418e3de-8e64-4cf5-99b4-25c5564ac718 766 0 2025-03-17 18:51:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.7-a-c36c8d7be6 coredns-7db6d8ff4d-d5v52 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali36c11c6a3a3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d5v52" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-" Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.822 [INFO][4376] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d5v52" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.936 [INFO][4407] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" HandleID="k8s-pod-network.e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.948 [INFO][4407] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" HandleID="k8s-pod-network.e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330c80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.7-a-c36c8d7be6", "pod":"coredns-7db6d8ff4d-d5v52", "timestamp":"2025-03-17 18:52:18.936292516 +0000 UTC"}, Hostname:"ci-3510.3.7-a-c36c8d7be6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.949 [INFO][4407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.949 [INFO][4407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.949 [INFO][4407] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-a-c36c8d7be6' Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.952 [INFO][4407] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.956 [INFO][4407] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.980 [INFO][4407] ipam/ipam.go 489: Trying affinity for 192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.984 [INFO][4407] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.987 [INFO][4407] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.987 [INFO][4407] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.990 [INFO][4407] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633 Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:18.998 [INFO][4407] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" host="ci-3510.3.7-a-c36c8d7be6" Mar 
17 18:52:19.066400 env[1566]: 2025-03-17 18:52:19.008 [INFO][4407] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.131/26] block=192.168.10.128/26 handle="k8s-pod-network.e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:19.008 [INFO][4407] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.131/26] handle="k8s-pod-network.e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:19.008 [INFO][4407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:19.066400 env[1566]: 2025-03-17 18:52:19.008 [INFO][4407] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.131/26] IPv6=[] ContainerID="e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" HandleID="k8s-pod-network.e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:19.067078 env[1566]: 2025-03-17 18:52:19.009 [INFO][4376] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d5v52" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3418e3de-8e64-4cf5-99b4-25c5564ac718", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"", Pod:"coredns-7db6d8ff4d-d5v52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36c11c6a3a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:19.067078 env[1566]: 2025-03-17 18:52:19.009 [INFO][4376] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.131/32] ContainerID="e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d5v52" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:19.067078 env[1566]: 
2025-03-17 18:52:19.009 [INFO][4376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36c11c6a3a3 ContainerID="e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d5v52" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:19.067078 env[1566]: 2025-03-17 18:52:19.050 [INFO][4376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d5v52" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:19.067078 env[1566]: 2025-03-17 18:52:19.050 [INFO][4376] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d5v52" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3418e3de-8e64-4cf5-99b4-25c5564ac718", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633", Pod:"coredns-7db6d8ff4d-d5v52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36c11c6a3a3", MAC:"0a:24:31:b4:7a:43", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:19.067078 env[1566]: 2025-03-17 18:52:19.064 [INFO][4376] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d5v52" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:19.096305 env[1566]: time="2025-03-17T18:52:19.095998641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:52:19.096305 env[1566]: time="2025-03-17T18:52:19.096041881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:52:19.096305 env[1566]: time="2025-03-17T18:52:19.096052720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:52:19.096523 env[1566]: time="2025-03-17T18:52:19.096263276Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633 pid=4453 runtime=io.containerd.runc.v2 Mar 17 18:52:19.117298 systemd-networkd[1734]: calif4651768243: Link UP Mar 17 18:52:19.129147 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif4651768243: link becomes ready Mar 17 18:52:19.128892 systemd-networkd[1734]: calif4651768243: Gained carrier Mar 17 18:52:19.143214 systemd-networkd[1734]: calid25cc6a53dd: Gained IPv6LL Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:18.874 [INFO][4394] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:18.895 [INFO][4394] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0 calico-kube-controllers-556dcdff49- calico-system 65d68753-4b96-44f6-80b9-42dfef957a45 767 0 2025-03-17 18:51:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:556dcdff49 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.7-a-c36c8d7be6 calico-kube-controllers-556dcdff49-tlrdg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif4651768243 [] []}} ContainerID="0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" Namespace="calico-system" Pod="calico-kube-controllers-556dcdff49-tlrdg" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-" Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:18.895 [INFO][4394] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" Namespace="calico-system" Pod="calico-kube-controllers-556dcdff49-tlrdg" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.003 [INFO][4419] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" HandleID="k8s-pod-network.0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.037 [INFO][4419] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" HandleID="k8s-pod-network.0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003989f0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.7-a-c36c8d7be6", "pod":"calico-kube-controllers-556dcdff49-tlrdg", "timestamp":"2025-03-17 18:52:19.003717319 +0000 UTC"}, Hostname:"ci-3510.3.7-a-c36c8d7be6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.037 [INFO][4419] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.037 [INFO][4419] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.037 [INFO][4419] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-a-c36c8d7be6' Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.038 [INFO][4419] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.044 [INFO][4419] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.048 [INFO][4419] ipam/ipam.go 489: Trying affinity for 192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.056 [INFO][4419] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.068 [INFO][4419] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.068 [INFO][4419] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.069 [INFO][4419] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.074 [INFO][4419] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.109 [INFO][4419] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.132/26] block=192.168.10.128/26 handle="k8s-pod-network.0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.109 [INFO][4419] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.132/26] handle="k8s-pod-network.0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.109 [INFO][4419] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:52:19.157576 env[1566]: 2025-03-17 18:52:19.109 [INFO][4419] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.132/26] IPv6=[] ContainerID="0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" HandleID="k8s-pod-network.0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:19.160169 env[1566]: 2025-03-17 18:52:19.114 [INFO][4394] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" Namespace="calico-system" Pod="calico-kube-controllers-556dcdff49-tlrdg" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0", GenerateName:"calico-kube-controllers-556dcdff49-", Namespace:"calico-system", SelfLink:"", UID:"65d68753-4b96-44f6-80b9-42dfef957a45", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"556dcdff49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"", Pod:"calico-kube-controllers-556dcdff49-tlrdg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4651768243", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:19.160169 env[1566]: 2025-03-17 18:52:19.114 [INFO][4394] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.132/32] ContainerID="0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" Namespace="calico-system" Pod="calico-kube-controllers-556dcdff49-tlrdg" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:19.160169 env[1566]: 2025-03-17 18:52:19.114 [INFO][4394] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4651768243 ContainerID="0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" Namespace="calico-system" Pod="calico-kube-controllers-556dcdff49-tlrdg" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:19.160169 env[1566]: 2025-03-17 18:52:19.130 [INFO][4394] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" Namespace="calico-system" Pod="calico-kube-controllers-556dcdff49-tlrdg" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:19.160169 env[1566]: 2025-03-17 18:52:19.131 
[INFO][4394] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" Namespace="calico-system" Pod="calico-kube-controllers-556dcdff49-tlrdg" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0", GenerateName:"calico-kube-controllers-556dcdff49-", Namespace:"calico-system", SelfLink:"", UID:"65d68753-4b96-44f6-80b9-42dfef957a45", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"556dcdff49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e", Pod:"calico-kube-controllers-556dcdff49-tlrdg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4651768243", MAC:"22:4d:0e:00:76:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:19.160169 env[1566]: 2025-03-17 18:52:19.156 [INFO][4394] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e" Namespace="calico-system" Pod="calico-kube-controllers-556dcdff49-tlrdg" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:19.160663 env[1566]: time="2025-03-17T18:52:19.160602595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d5v52,Uid:3418e3de-8e64-4cf5-99b4-25c5564ac718,Namespace:kube-system,Attempt:1,} returns sandbox id \"e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633\"" Mar 17 18:52:19.167766 env[1566]: time="2025-03-17T18:52:19.167701534Z" level=info msg="CreateContainer within sandbox \"e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:52:19.214526 env[1566]: time="2025-03-17T18:52:19.211532621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:52:19.214526 env[1566]: time="2025-03-17T18:52:19.211594580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:52:19.214526 env[1566]: time="2025-03-17T18:52:19.211606220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:52:19.214526 env[1566]: time="2025-03-17T18:52:19.211906574Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e pid=4503 runtime=io.containerd.runc.v2 Mar 17 18:52:19.249271 env[1566]: time="2025-03-17T18:52:19.249116033Z" level=info msg="CreateContainer within sandbox \"e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5583d4735dd308628fe8a0416903ddd97e6b2def67ed904cd504191e1405927c\"" Mar 17 18:52:19.251485 env[1566]: time="2025-03-17T18:52:19.251428307Z" level=info msg="StartContainer for \"5583d4735dd308628fe8a0416903ddd97e6b2def67ed904cd504191e1405927c\"" Mar 17 18:52:19.268094 env[1566]: time="2025-03-17T18:52:19.268037856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-556dcdff49-tlrdg,Uid:65d68753-4b96-44f6-80b9-42dfef957a45,Namespace:calico-system,Attempt:1,} returns sandbox id \"0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e\"" Mar 17 18:52:19.310157 env[1566]: time="2025-03-17T18:52:19.310095499Z" level=info msg="StartContainer for \"5583d4735dd308628fe8a0416903ddd97e6b2def67ed904cd504191e1405927c\" returns successfully" Mar 17 18:52:19.696918 kubelet[2763]: I0317 18:52:19.696843 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-d5v52" podStartSLOduration=34.696827358 podStartE2EDuration="34.696827358s" podCreationTimestamp="2025-03-17 18:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:52:19.696641082 +0000 UTC m=+48.644845238" watchObservedRunningTime="2025-03-17 18:52:19.696827358 +0000 UTC m=+48.645031474" Mar 17 18:52:19.771000 audit[4577]: NETFILTER_CFG table=filter:104 family=2 entries=12 op=nft_register_rule pid=4577 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:19.778480 kernel: kauditd_printk_skb: 37 callbacks suppressed Mar 17 18:52:19.778653 kernel: audit: type=1325 audit(1742237539.771:316): table=filter:104 family=2 entries=12 op=nft_register_rule pid=4577 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:19.771000 audit[4577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4420 a0=3 a1=ffffd2f2c540 a2=0 a3=1 items=0 ppid=2898 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:19.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:19.832028 kernel: audit: type=1300 audit(1742237539.771:316): arch=c00000b7 syscall=211 success=yes exit=4420 a0=3 a1=ffffd2f2c540 a2=0 a3=1 items=0 ppid=2898 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:19.832000 audit[4577]: NETFILTER_CFG table=nat:105 family=2 entries=42 op=nft_register_rule pid=4577 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:19.848985 kernel: audit: type=1327 audit(1742237539.771:316): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:19.832000 audit[4577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13428 a0=3 a1=ffffd2f2c540 a2=0 a3=1 items=0 ppid=2898 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:19.870918 kernel: audit: type=1325 audit(1742237539.832:317): table=nat:105 family=2 entries=42 op=nft_register_rule pid=4577 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:19.832000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:19.916929 kernel: audit: type=1300 audit(1742237539.832:317): arch=c00000b7 syscall=211 success=yes exit=13428 a0=3 a1=ffffd2f2c540 a2=0 a3=1 items=0 ppid=2898 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:19.917017 kernel: audit: type=1327 audit(1742237539.832:317): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:20.036214 systemd-networkd[1734]: calic078a3256b2: Gained IPv6LL Mar 17 18:52:20.482125 systemd-networkd[1734]: cali36c11c6a3a3: Gained IPv6LL Mar 17 18:52:20.532715 env[1566]: time="2025-03-17T18:52:20.532033604Z" level=info msg="StopPodSandbox for \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\"" Mar 17 18:52:20.672892 env[1566]: 2025-03-17 18:52:20.634 [INFO][4614] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Mar 17 18:52:20.672892 env[1566]: 2025-03-17 18:52:20.634 [INFO][4614] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" iface="eth0" netns="/var/run/netns/cni-2c4cec33-638f-dbcb-e5c2-6d5bf7403813" Mar 17 18:52:20.672892 env[1566]: 2025-03-17 18:52:20.635 [INFO][4614] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" iface="eth0" netns="/var/run/netns/cni-2c4cec33-638f-dbcb-e5c2-6d5bf7403813" Mar 17 18:52:20.672892 env[1566]: 2025-03-17 18:52:20.635 [INFO][4614] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" iface="eth0" netns="/var/run/netns/cni-2c4cec33-638f-dbcb-e5c2-6d5bf7403813" Mar 17 18:52:20.672892 env[1566]: 2025-03-17 18:52:20.635 [INFO][4614] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Mar 17 18:52:20.672892 env[1566]: 2025-03-17 18:52:20.635 [INFO][4614] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Mar 17 18:52:20.672892 env[1566]: 2025-03-17 18:52:20.658 [INFO][4620] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" HandleID="k8s-pod-network.c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:20.672892 env[1566]: 2025-03-17 18:52:20.658 [INFO][4620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:20.672892 env[1566]: 2025-03-17 18:52:20.658 [INFO][4620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:20.672892 env[1566]: 2025-03-17 18:52:20.668 [WARNING][4620] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" HandleID="k8s-pod-network.c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:20.672892 env[1566]: 2025-03-17 18:52:20.668 [INFO][4620] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" HandleID="k8s-pod-network.c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:20.672892 env[1566]: 2025-03-17 18:52:20.670 [INFO][4620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:20.672892 env[1566]: 2025-03-17 18:52:20.671 [INFO][4614] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Mar 17 18:52:20.675540 systemd[1]: run-netns-cni\x2d2c4cec33\x2d638f\x2ddbcb\x2de5c2\x2d6d5bf7403813.mount: Deactivated successfully. 
Mar 17 18:52:20.677317 env[1566]: time="2025-03-17T18:52:20.677275904Z" level=info msg="TearDown network for sandbox \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\" successfully" Mar 17 18:52:20.677403 env[1566]: time="2025-03-17T18:52:20.677387142Z" level=info msg="StopPodSandbox for \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\" returns successfully" Mar 17 18:52:20.679294 env[1566]: time="2025-03-17T18:52:20.679098348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g7tkt,Uid:b729951e-7fde-40b6-a0f1-675d7c66febf,Namespace:calico-system,Attempt:1,}" Mar 17 18:52:20.802086 systemd-networkd[1734]: calif4651768243: Gained IPv6LL Mar 17 18:52:20.874000 audit[4628]: NETFILTER_CFG table=filter:106 family=2 entries=12 op=nft_register_rule pid=4628 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:20.874000 audit[4628]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4420 a0=3 a1=ffffe2b43690 a2=0 a3=1 items=0 ppid=2898 pid=4628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:20.920033 kernel: audit: type=1325 audit(1742237540.874:318): table=filter:106 family=2 entries=12 op=nft_register_rule pid=4628 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:20.920161 kernel: audit: type=1300 audit(1742237540.874:318): arch=c00000b7 syscall=211 success=yes exit=4420 a0=3 a1=ffffe2b43690 a2=0 a3=1 items=0 ppid=2898 pid=4628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:20.874000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:20.935216 kernel: audit: type=1327 audit(1742237540.874:318): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:21.204652 kubelet[2763]: I0317 18:52:21.204531 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:52:21.320000 audit[4628]: NETFILTER_CFG table=nat:107 family=2 entries=54 op=nft_register_chain pid=4628 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:21.320000 audit[4628]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19092 a0=3 a1=ffffe2b43690 a2=0 a3=1 items=0 ppid=2898 pid=4628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:21.343907 kernel: audit: type=1325 audit(1742237541.320:319): table=nat:107 family=2 entries=54 op=nft_register_chain pid=4628 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:21.320000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:21.415221 env[1566]: time="2025-03-17T18:52:21.415181660Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:21.424565 env[1566]: time="2025-03-17T18:52:21.424507518Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:21.431439 env[1566]: time="2025-03-17T18:52:21.430973472Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:21.438153 env[1566]: time="2025-03-17T18:52:21.438106733Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:21.438933 env[1566]: time="2025-03-17T18:52:21.438896998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Mar 17 18:52:21.448386 env[1566]: time="2025-03-17T18:52:21.447513710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Mar 17 18:52:21.463903 env[1566]: time="2025-03-17T18:52:21.463844992Z" level=info msg="CreateContainer within sandbox \"58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 18:52:21.536426 env[1566]: time="2025-03-17T18:52:21.536386018Z" level=info msg="StopPodSandbox for \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\"" Mar 17 18:52:21.543531 env[1566]: time="2025-03-17T18:52:21.543475840Z" level=info msg="CreateContainer within sandbox \"58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d60265e2f70b477c70323683cfd358132cef3387c4df587a11663adfe63e9927\"" Mar 17 18:52:21.545317 env[1566]: time="2025-03-17T18:52:21.544804894Z" level=info msg="StartContainer for \"d60265e2f70b477c70323683cfd358132cef3387c4df587a11663adfe63e9927\"" Mar 17 18:52:21.615498 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:52:21.615643 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliee17ef2878d: link becomes ready Mar 17 18:52:21.614517 systemd-networkd[1734]: caliee17ef2878d: Link UP Mar 17 18:52:21.623393 systemd-networkd[1734]: caliee17ef2878d: Gained carrier Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.456 [INFO][4649] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.481 [INFO][4649] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0 csi-node-driver- calico-system b729951e-7fde-40b6-a0f1-675d7c66febf 800 0 2025-03-17 18:51:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.7-a-c36c8d7be6 csi-node-driver-g7tkt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliee17ef2878d [] []}} ContainerID="fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" Namespace="calico-system" Pod="csi-node-driver-g7tkt" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-" Mar 17 
18:52:21.655131 env[1566]: 2025-03-17 18:52:21.482 [INFO][4649] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" Namespace="calico-system" Pod="csi-node-driver-g7tkt" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.527 [INFO][4669] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" HandleID="k8s-pod-network.fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.553 [INFO][4669] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" HandleID="k8s-pod-network.fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318ab0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.7-a-c36c8d7be6", "pod":"csi-node-driver-g7tkt", "timestamp":"2025-03-17 18:52:21.5270422 +0000 UTC"}, Hostname:"ci-3510.3.7-a-c36c8d7be6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.553 [INFO][4669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.553 [INFO][4669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
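Annotation: each env[1566] entry above and below re-emits the Calico plugin's own log line, whose layout (timestamp, [LEVEL][pid], source file and line number, message) is inferred from this log rather than from any documented format. A minimal Python sketch for splitting such a line into fields, using one of the records above as the sample:

import re

# Field layout assumed from the lines in this log, e.g.:
#   2025-03-17 18:52:21.553 [INFO][4669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
CALICO_LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"\[(?P<level>[A-Z]+)\]\[(?P<pid>\d+)\] "
    r"(?P<source>\S+) (?P<lineno>\d+): "
    r"(?P<msg>.*)$"
)

sample = "2025-03-17 18:52:21.553 [INFO][4669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock."
m = CALICO_LINE.match(sample)
if m:
    print(m.group("level"), m.group("source"), m.group("msg"))
    # -> INFO ipam/ipam_plugin.go Acquired host-wide IPAM lock.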
Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.553 [INFO][4669] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-a-c36c8d7be6' Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.555 [INFO][4669] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.559 [INFO][4669] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.570 [INFO][4669] ipam/ipam.go 489: Trying affinity for 192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.573 [INFO][4669] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.575 [INFO][4669] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.576 [INFO][4669] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.578 [INFO][4669] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.585 [INFO][4669] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.601 [INFO][4669] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.133/26] block=192.168.10.128/26 handle="k8s-pod-network.fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.601 [INFO][4669] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.133/26] handle="k8s-pod-network.fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.601 [INFO][4669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
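Annotation: the IPAM steps just logged take the host's affine block 192.168.10.128/26 and hand 192.168.10.133 to csi-node-driver-g7tkt (the calico-apiserver pod further down gets 192.168.10.134 from the same block). A quick standard-library check of that arithmetic:

import ipaddress

# Affine block and assigned address as logged above.
block = ipaddress.ip_network("192.168.10.128/26")
assigned = ipaddress.ip_address("192.168.10.133")

print(assigned in block)     # True: .133 lies inside the /26 block
print(block.num_addresses)   # 64 addresses per /26 block
print(block[5])              # 192.168.10.133 is the sixth address in the block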
Mar 17 18:52:21.655131 env[1566]: 2025-03-17 18:52:21.601 [INFO][4669] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.133/26] IPv6=[] ContainerID="fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" HandleID="k8s-pod-network.fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:21.655763 env[1566]: 2025-03-17 18:52:21.603 [INFO][4649] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" Namespace="calico-system" Pod="csi-node-driver-g7tkt" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b729951e-7fde-40b6-a0f1-675d7c66febf", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"", Pod:"csi-node-driver-g7tkt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee17ef2878d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:21.655763 env[1566]: 2025-03-17 18:52:21.604 [INFO][4649] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.133/32] ContainerID="fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" Namespace="calico-system" Pod="csi-node-driver-g7tkt" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:21.655763 env[1566]: 2025-03-17 18:52:21.604 [INFO][4649] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee17ef2878d ContainerID="fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" Namespace="calico-system" Pod="csi-node-driver-g7tkt" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:21.655763 env[1566]: 2025-03-17 18:52:21.624 [INFO][4649] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" Namespace="calico-system" Pod="csi-node-driver-g7tkt" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:21.655763 env[1566]: 2025-03-17 18:52:21.624 [INFO][4649] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" Namespace="calico-system" 
Pod="csi-node-driver-g7tkt" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b729951e-7fde-40b6-a0f1-675d7c66febf", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b", Pod:"csi-node-driver-g7tkt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee17ef2878d", MAC:"5e:7e:26:a6:b7:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:21.655763 env[1566]: 2025-03-17 18:52:21.651 [INFO][4649] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b" Namespace="calico-system" Pod="csi-node-driver-g7tkt" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:21.704620 env[1566]: time="2025-03-17T18:52:21.704570502Z" level=info msg="StartContainer for \"d60265e2f70b477c70323683cfd358132cef3387c4df587a11663adfe63e9927\" returns successfully" Mar 17 18:52:21.707796 env[1566]: time="2025-03-17T18:52:21.705854237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:52:21.707796 env[1566]: time="2025-03-17T18:52:21.705929915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:52:21.707796 env[1566]: time="2025-03-17T18:52:21.705940555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:52:21.707796 env[1566]: time="2025-03-17T18:52:21.706070312Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b pid=4761 runtime=io.containerd.runc.v2 Mar 17 18:52:21.754423 env[1566]: 2025-03-17 18:52:21.679 [INFO][4718] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Mar 17 18:52:21.754423 env[1566]: 2025-03-17 18:52:21.684 [INFO][4718] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" iface="eth0" netns="/var/run/netns/cni-d989042c-5dc4-7b71-6b42-487ac132d7dc" Mar 17 18:52:21.754423 env[1566]: 2025-03-17 18:52:21.685 [INFO][4718] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" iface="eth0" netns="/var/run/netns/cni-d989042c-5dc4-7b71-6b42-487ac132d7dc" Mar 17 18:52:21.754423 env[1566]: 2025-03-17 18:52:21.685 [INFO][4718] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" iface="eth0" netns="/var/run/netns/cni-d989042c-5dc4-7b71-6b42-487ac132d7dc" Mar 17 18:52:21.754423 env[1566]: 2025-03-17 18:52:21.685 [INFO][4718] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Mar 17 18:52:21.754423 env[1566]: 2025-03-17 18:52:21.685 [INFO][4718] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Mar 17 18:52:21.754423 env[1566]: 2025-03-17 18:52:21.736 [INFO][4745] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" HandleID="k8s-pod-network.faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:21.754423 env[1566]: 2025-03-17 18:52:21.736 [INFO][4745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:21.754423 env[1566]: 2025-03-17 18:52:21.736 [INFO][4745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:21.754423 env[1566]: 2025-03-17 18:52:21.749 [WARNING][4745] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" HandleID="k8s-pod-network.faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:21.754423 env[1566]: 2025-03-17 18:52:21.749 [INFO][4745] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" HandleID="k8s-pod-network.faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:21.754423 env[1566]: 2025-03-17 18:52:21.751 [INFO][4745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:21.754423 env[1566]: 2025-03-17 18:52:21.753 [INFO][4718] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Mar 17 18:52:21.758322 systemd[1]: run-netns-cni\x2dd989042c\x2d5dc4\x2d7b71\x2d6b42\x2d487ac132d7dc.mount: Deactivated successfully. 
Mar 17 18:52:21.764584 env[1566]: time="2025-03-17T18:52:21.764538893Z" level=info msg="TearDown network for sandbox \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\" successfully" Mar 17 18:52:21.764584 env[1566]: time="2025-03-17T18:52:21.764576093Z" level=info msg="StopPodSandbox for \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\" returns successfully" Mar 17 18:52:21.766985 env[1566]: time="2025-03-17T18:52:21.766942567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b85fdb584-4542j,Uid:3da68d2e-dbd2-4641-b6b6-c1e92b514d10,Namespace:calico-apiserver,Attempt:1,}" Mar 17 18:52:21.832490 kubelet[2763]: E0317 18:52:21.832449 2763 cadvisor_stats_provider.go:500] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podb729951e-7fde-40b6-a0f1-675d7c66febf/fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b\": RecentStats: unable to find data in memory cache]" Mar 17 18:52:21.839309 env[1566]: time="2025-03-17T18:52:21.839259318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g7tkt,Uid:b729951e-7fde-40b6-a0f1-675d7c66febf,Namespace:calico-system,Attempt:1,} returns sandbox id \"fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b\"" Mar 17 18:52:22.072943 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie6a2862f721: link becomes ready Mar 17 18:52:22.073709 systemd-networkd[1734]: calie6a2862f721: Link UP Mar 17 18:52:22.073890 systemd-networkd[1734]: calie6a2862f721: Gained carrier Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:21.912 [INFO][4803] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:21.932 [INFO][4803] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0 calico-apiserver-7b85fdb584- calico-apiserver 3da68d2e-dbd2-4641-b6b6-c1e92b514d10 815 0 2025-03-17 18:51:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b85fdb584 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.7-a-c36c8d7be6 calico-apiserver-7b85fdb584-4542j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie6a2862f721 [] []}} ContainerID="646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-4542j" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-" Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:21.932 [INFO][4803] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-4542j" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:21.990 [INFO][4813] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" HandleID="k8s-pod-network.646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:22.078854 env[1566]: 
2025-03-17 18:52:22.006 [INFO][4813] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" HandleID="k8s-pod-network.646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332c80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.7-a-c36c8d7be6", "pod":"calico-apiserver-7b85fdb584-4542j", "timestamp":"2025-03-17 18:52:21.990697007 +0000 UTC"}, Hostname:"ci-3510.3.7-a-c36c8d7be6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:22.006 [INFO][4813] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:22.006 [INFO][4813] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:22.006 [INFO][4813] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-a-c36c8d7be6' Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:22.010 [INFO][4813] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:22.015 [INFO][4813] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:22.020 [INFO][4813] ipam/ipam.go 489: Trying affinity for 192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:22.023 [INFO][4813] ipam/ipam.go 155: Attempting to load block cidr=192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:22.026 [INFO][4813] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:22.026 [INFO][4813] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:22.029 [INFO][4813] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:22.038 [INFO][4813] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:22.048 [INFO][4813] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.10.134/26] block=192.168.10.128/26 handle="k8s-pod-network.646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:22.048 [INFO][4813] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.134/26] handle="k8s-pod-network.646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" host="ci-3510.3.7-a-c36c8d7be6" Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:22.048 [INFO][4813] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:22.078854 env[1566]: 2025-03-17 18:52:22.048 [INFO][4813] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.134/26] IPv6=[] ContainerID="646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" HandleID="k8s-pod-network.646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:22.079489 env[1566]: 2025-03-17 18:52:22.051 [INFO][4803] cni-plugin/k8s.go 386: Populated endpoint ContainerID="646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-4542j" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0", GenerateName:"calico-apiserver-7b85fdb584-", Namespace:"calico-apiserver", SelfLink:"", UID:"3da68d2e-dbd2-4641-b6b6-c1e92b514d10", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b85fdb584", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"", Pod:"calico-apiserver-7b85fdb584-4542j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6a2862f721", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:22.079489 env[1566]: 2025-03-17 18:52:22.051 [INFO][4803] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.10.134/32] ContainerID="646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-4542j" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:22.079489 env[1566]: 2025-03-17 18:52:22.051 [INFO][4803] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6a2862f721 ContainerID="646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-4542j" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:22.079489 env[1566]: 2025-03-17 18:52:22.052 [INFO][4803] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-4542j" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:22.079489 env[1566]: 2025-03-17 18:52:22.052 [INFO][4803] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-4542j" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0", GenerateName:"calico-apiserver-7b85fdb584-", Namespace:"calico-apiserver", SelfLink:"", UID:"3da68d2e-dbd2-4641-b6b6-c1e92b514d10", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b85fdb584", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d", Pod:"calico-apiserver-7b85fdb584-4542j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6a2862f721", MAC:"52:09:5b:3d:ef:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:22.079489 env[1566]: 2025-03-17 18:52:22.072 [INFO][4803] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d" Namespace="calico-apiserver" Pod="calico-apiserver-7b85fdb584-4542j" WorkloadEndpoint="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:22.122045 env[1566]: time="2025-03-17T18:52:22.121803638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:52:22.122045 env[1566]: time="2025-03-17T18:52:22.121848317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:52:22.122045 env[1566]: time="2025-03-17T18:52:22.121859077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:52:22.122369 env[1566]: time="2025-03-17T18:52:22.122302068Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d pid=4844 runtime=io.containerd.runc.v2 Mar 17 18:52:22.193700 env[1566]: time="2025-03-17T18:52:22.193645653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b85fdb584-4542j,Uid:3da68d2e-dbd2-4641-b6b6-c1e92b514d10,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d\"" Mar 17 18:52:22.198000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.199730 env[1566]: time="2025-03-17T18:52:22.199670777Z" level=info msg="CreateContainer within sandbox \"646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 18:52:22.198000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.198000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.198000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.198000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.198000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.198000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.198000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.198000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.198000 audit: BPF prog-id=10 op=LOAD Mar 17 18:52:22.198000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd7c9ccc8 a2=98 a3=ffffd7c9ccb8 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.198000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.200000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:52:22.201000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.201000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.201000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.201000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.201000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.201000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.201000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.201000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.201000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.201000 audit: BPF prog-id=11 op=LOAD Mar 17 18:52:22.201000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd7c9c958 a2=74 a3=95 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.202000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:52:22.202000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.202000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.202000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.202000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.202000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.202000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.202000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.202000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.202000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.202000 audit: BPF prog-id=12 op=LOAD Mar 17 18:52:22.202000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd7c9c9b8 a2=94 a3=2 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.202000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.202000 audit: BPF prog-id=12 op=UNLOAD Mar 17 18:52:22.279192 env[1566]: time="2025-03-17T18:52:22.279131205Z" level=info msg="CreateContainer within sandbox \"646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"460e7920518dd31b162b0cea50dec5fdeaab4286e88739afb158701efe83ff23\"" Mar 17 18:52:22.280099 env[1566]: time="2025-03-17T18:52:22.280069987Z" level=info msg="StartContainer for \"460e7920518dd31b162b0cea50dec5fdeaab4286e88739afb158701efe83ff23\"" Mar 17 18:52:22.355000 audit[4922]: NETFILTER_CFG table=filter:108 family=2 entries=11 op=nft_register_rule pid=4922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:22.355000 audit[4922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffc00225c0 a2=0 a3=1 items=0 ppid=2898 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.355000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:22.360000 audit[4922]: NETFILTER_CFG table=nat:109 family=2 entries=25 op=nft_register_chain pid=4922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:22.360000 audit[4922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8580 a0=3 a1=ffffc00225c0 a2=0 a3=1 items=0 ppid=2898 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.360000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:22.383071 env[1566]: time="2025-03-17T18:52:22.383015322Z" level=info msg="StartContainer for \"460e7920518dd31b162b0cea50dec5fdeaab4286e88739afb158701efe83ff23\" returns successfully" Mar 17 18:52:22.472000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.472000 
audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.472000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.472000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.472000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.472000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.472000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.472000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.472000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.472000 audit: BPF prog-id=13 op=LOAD Mar 17 18:52:22.472000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd7c9c978 a2=40 a3=ffffd7c9c9a8 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.472000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.483000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:52:22.483000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.483000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffd7c9ca90 a2=50 a3=0 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.483000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.566000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.566000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd7c9c9e8 a2=28 a3=ffffd7c9cb18 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.566000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 
18:52:22.588000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.588000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd7c9ca18 a2=28 a3=ffffd7c9cb48 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.588000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.588000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.588000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd7c9c8c8 a2=28 a3=ffffd7c9c9f8 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.588000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.588000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.588000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd7c9ca38 a2=28 a3=ffffd7c9cb68 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.588000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.594000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.594000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd7c9ca18 a2=28 a3=ffffd7c9cb48 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.594000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.594000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.594000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd7c9ca08 a2=28 a3=ffffd7c9cb38 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.594000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.594000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.594000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 
a0=12 a1=ffffd7c9ca38 a2=28 a3=ffffd7c9cb68 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.594000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.600000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.600000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd7c9ca18 a2=28 a3=ffffd7c9cb48 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.600000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.601000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.601000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd7c9ca38 a2=28 a3=ffffd7c9cb68 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.601000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.601000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.601000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd7c9ca08 a2=28 a3=ffffd7c9cb38 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.601000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.601000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.601000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd7c9ca88 a2=28 a3=ffffd7c9cbc8 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.601000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.602000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.602000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd7c9c7c0 a2=50 a3=0 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.602000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.603000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.603000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.603000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.603000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.603000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.603000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.603000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.603000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.603000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.603000 audit: BPF prog-id=14 op=LOAD Mar 17 18:52:22.603000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd7c9c7c8 a2=94 a3=5 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.603000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.604000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:52:22.604000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.604000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd7c9c8d0 a2=50 a3=0 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.604000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.604000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.604000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffd7c9ca18 a2=4 a3=3 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.604000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.605000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.605000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.605000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.605000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.605000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.605000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.605000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.605000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.605000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.605000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.605000 audit[4887]: AVC avc: denied { confidentiality } for pid=4887 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:52:22.605000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd7c9c9f8 a2=94 a3=6 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.605000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.612000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.612000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.612000 audit[4887]: AVC avc: denied 
{ perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.612000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.612000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.612000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.612000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.612000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.612000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.612000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.612000 audit[4887]: AVC avc: denied { confidentiality } for pid=4887 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:52:22.612000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd7c9c1c8 a2=94 a3=83 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.612000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.613000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.613000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.613000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.613000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.613000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.613000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.613000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.613000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.613000 audit[4887]: AVC avc: denied { perfmon } for pid=4887 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.613000 audit[4887]: AVC avc: denied { bpf } for pid=4887 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.613000 audit[4887]: AVC avc: denied { confidentiality } for pid=4887 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:52:22.613000 audit[4887]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd7c9c1c8 a2=94 a3=83 items=0 ppid=4817 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.613000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:52:22.648000 audit[4943]: AVC avc: denied { bpf } for pid=4943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.648000 audit[4943]: AVC avc: denied { bpf } for pid=4943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.648000 audit[4943]: AVC avc: denied { perfmon } for pid=4943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.648000 audit[4943]: AVC avc: denied { perfmon } for pid=4943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.648000 audit[4943]: AVC avc: denied { perfmon } for pid=4943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.648000 audit[4943]: AVC avc: denied { perfmon } for pid=4943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.648000 audit[4943]: AVC avc: denied { perfmon } for pid=4943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.648000 audit[4943]: AVC avc: denied { bpf } for pid=4943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.648000 audit[4943]: AVC avc: denied { bpf } for pid=4943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Mar 17 18:52:22.648000 audit: BPF prog-id=15 op=LOAD Mar 17 18:52:22.648000 audit[4943]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffa62a5c8 a2=98 a3=fffffa62a5b8 items=0 ppid=4817 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.648000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Mar 17 18:52:22.649000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { bpf } for pid=4943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { bpf } for pid=4943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { perfmon } for pid=4943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { perfmon } for pid=4943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { perfmon } for pid=4943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { perfmon } for pid=4943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { perfmon } for pid=4943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { bpf } for pid=4943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { bpf } for pid=4943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit: BPF prog-id=16 op=LOAD Mar 17 18:52:22.649000 audit[4943]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffa62a478 a2=74 a3=95 items=0 ppid=4817 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.649000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Mar 17 18:52:22.649000 audit: BPF prog-id=16 op=UNLOAD Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { bpf } for pid=4943 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { bpf } for pid=4943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { perfmon } for pid=4943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { perfmon } for pid=4943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { perfmon } for pid=4943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { perfmon } for pid=4943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { perfmon } for pid=4943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { bpf } for pid=4943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit[4943]: AVC avc: denied { bpf } for pid=4943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:22.649000 audit: BPF prog-id=17 op=LOAD Mar 17 18:52:22.649000 audit[4943]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffa62a4a8 a2=40 a3=fffffa62a4d8 items=0 ppid=4817 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.649000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Mar 17 18:52:22.649000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:52:22.679468 systemd[1]: run-containerd-runc-k8s.io-fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b-runc.SjnCa9.mount: Deactivated successfully. 
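The PROCTITLE fields in the audit records throughout this log carry the audited command line hex-encoded, with a NUL byte separating argv elements; the values here decode to bpftool invocations such as "bpftool map list --json" and the Calico map create / prog load calls against /sys/fs/bpf/calico/. A minimal Python sketch for decoding one such value (using the sample proctitle string that appears in the records above; only the standard library is assumed):

    # Hedged sketch: decode an audit PROCTITLE value, which is the process
    # argv hex-encoded with NUL bytes between arguments.
    hex_proctitle = "627066746F6F6C006D6170006C697374002D2D6A736F6E"
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))  # prints: bpftool map list --json

The same decoding applies to the longer proctitle strings in the surrounding records, which spell out the pinned map and XDP prefilter paths under /sys/fs/bpf/calico/.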
Mar 17 18:52:22.746067 kubelet[2763]: I0317 18:52:22.745997 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b85fdb584-4542j" podStartSLOduration=29.745976165 podStartE2EDuration="29.745976165s" podCreationTimestamp="2025-03-17 18:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:52:22.730750539 +0000 UTC m=+51.678954655" watchObservedRunningTime="2025-03-17 18:52:22.745976165 +0000 UTC m=+51.694180281" Mar 17 18:52:22.763000 audit[4955]: NETFILTER_CFG table=filter:110 family=2 entries=10 op=nft_register_rule pid=4955 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:22.763000 audit[4955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffc29d0810 a2=0 a3=1 items=0 ppid=2898 pid=4955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.763000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:22.767000 audit[4955]: NETFILTER_CFG table=nat:111 family=2 entries=20 op=nft_register_rule pid=4955 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:22.767000 audit[4955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc29d0810 a2=0 a3=1 items=0 ppid=2898 pid=4955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:22.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:22.855937 systemd-networkd[1734]: caliee17ef2878d: Gained IPv6LL Mar 17 18:52:22.934413 systemd-networkd[1734]: vxlan.calico: Link UP Mar 17 18:52:22.934420 systemd-networkd[1734]: vxlan.calico: Gained carrier Mar 17 18:52:23.015000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.015000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.015000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.015000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.015000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.015000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.015000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.015000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.015000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.015000 audit: BPF prog-id=18 op=LOAD Mar 17 18:52:23.015000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffb9e76b8 a2=98 a3=fffffb9e76a8 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.015000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.019000 audit: BPF prog-id=18 op=UNLOAD Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit: BPF prog-id=19 op=LOAD Mar 17 18:52:23.019000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffb9e7398 a2=74 a3=95 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Mar 17 18:52:23.019000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.019000 audit: BPF prog-id=19 op=UNLOAD Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.019000 audit: BPF prog-id=20 op=LOAD Mar 17 18:52:23.019000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffb9e73f8 a2=94 a3=2 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.019000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.021000 audit: BPF prog-id=20 op=UNLOAD Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffb9e7428 a2=28 a3=fffffb9e7558 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.021000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffb9e7458 a2=28 a3=fffffb9e7588 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffb9e7308 a2=28 a3=fffffb9e7438 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffb9e7478 a2=28 a3=fffffb9e75a8 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffb9e7458 a2=28 a3=fffffb9e7588 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { bpf } for pid=4972 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffb9e7448 a2=28 a3=fffffb9e7578 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffb9e7478 a2=28 a3=fffffb9e75a8 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffb9e7458 a2=28 a3=fffffb9e7588 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffb9e7478 a2=28 a3=fffffb9e75a8 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffb9e7448 a2=28 a3=fffffb9e7578 items=0 ppid=4817 pid=4972 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffb9e74c8 a2=28 a3=fffffb9e7608 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.021000 audit: BPF prog-id=21 op=LOAD Mar 17 18:52:23.021000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffb9e72e8 a2=40 a3=fffffb9e7318 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.022000 audit: BPF prog-id=21 op=UNLOAD Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=fffffb9e7310 a2=50 a3=0 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.022000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=fffffb9e7310 a2=50 a3=0 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.022000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit: BPF prog-id=22 op=LOAD Mar 17 18:52:23.022000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffb9e6a78 a2=94 a3=2 items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.022000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.022000 audit: BPF prog-id=22 op=UNLOAD Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { perfmon } for pid=4972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit[4972]: AVC avc: denied { bpf } for pid=4972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.022000 audit: BPF prog-id=23 op=LOAD Mar 17 18:52:23.022000 audit[4972]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffb9e6c08 a2=94 
a3=2d items=0 ppid=4817 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.022000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:52:23.029000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.029000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.029000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.029000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.029000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.029000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.029000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.029000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.029000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.029000 audit: BPF prog-id=24 op=LOAD Mar 17 18:52:23.029000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd679e448 a2=98 a3=ffffd679e438 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.029000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.029000 audit: BPF prog-id=24 op=UNLOAD Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: 
denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit: BPF prog-id=25 op=LOAD Mar 17 18:52:23.030000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd679e0d8 a2=74 a3=95 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.030000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.030000 audit: BPF prog-id=25 op=UNLOAD Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.030000 audit: BPF prog-id=26 op=LOAD Mar 17 18:52:23.030000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd679e138 a2=94 a3=2 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.030000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.030000 audit: BPF prog-id=26 op=UNLOAD Mar 17 18:52:23.286000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.286000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.286000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.286000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.286000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.286000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.286000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.286000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.286000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.286000 audit: BPF prog-id=27 op=LOAD Mar 17 18:52:23.286000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd679e0f8 a2=40 a3=ffffd679e128 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.286000 audit: 
PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.286000 audit: BPF prog-id=27 op=UNLOAD Mar 17 18:52:23.286000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.286000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffd679e210 a2=50 a3=0 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.286000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.309000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.309000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd679e168 a2=28 a3=ffffd679e298 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.309000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.309000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd679e198 a2=28 a3=ffffd679e2c8 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.309000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.309000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd679e048 a2=28 a3=ffffd679e178 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.309000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.309000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd679e1b8 a2=28 a3=ffffd679e2e8 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.309000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.309000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd679e198 a2=28 a3=ffffd679e2c8 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.309000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.309000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd679e188 a2=28 a3=ffffd679e2b8 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.309000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.309000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd679e1b8 a2=28 a3=ffffd679e2e8 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.309000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.309000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd679e198 a2=28 a3=ffffd679e2c8 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 
18:52:23.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.309000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.309000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd679e1b8 a2=28 a3=ffffd679e2e8 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.309000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.309000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd679e188 a2=28 a3=ffffd679e2b8 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.309000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.309000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd679e208 a2=28 a3=ffffd679e348 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.310000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.310000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd679df40 a2=50 a3=0 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.310000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit: BPF prog-id=28 op=LOAD Mar 17 18:52:23.314000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd679df48 a2=94 a3=5 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.314000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.314000 audit: BPF prog-id=28 op=UNLOAD Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd679e050 a2=50 a3=0 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.314000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffd679e198 a2=4 a3=3 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.314000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { confidentiality } for pid=4977 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:52:23.314000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd679e178 a2=94 a3=6 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.314000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.314000 audit[4977]: AVC avc: denied { confidentiality } for pid=4977 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:52:23.314000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd679d948 a2=94 a3=83 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.314000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.315000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.315000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.315000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.315000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.315000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.315000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.315000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.315000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.315000 audit[4977]: AVC avc: denied { perfmon } for pid=4977 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.315000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.315000 audit[4977]: AVC avc: denied { confidentiality } for pid=4977 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:52:23.315000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd679d948 a2=94 a3=83 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.315000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.317000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.317000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd679f388 a2=10 a3=ffffd679f478 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.317000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.317000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.317000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd679f248 a2=10 a3=ffffd679f338 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.317000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.317000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.317000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd679f1b8 a2=10 a3=ffffd679f338 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.317000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.317000 audit[4977]: AVC avc: denied { bpf } for pid=4977 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:52:23.317000 audit[4977]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd679f1b8 a2=10 a3=ffffd679f338 items=0 ppid=4817 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.317000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:52:23.326000 audit: BPF prog-id=23 op=UNLOAD Mar 17 18:52:23.618057 systemd-networkd[1734]: calie6a2862f721: Gained IPv6LL Mar 17 18:52:23.717464 kubelet[2763]: I0317 18:52:23.717422 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:52:23.735000 audit[5006]: NETFILTER_CFG table=mangle:112 family=2 entries=16 op=nft_register_chain pid=5006 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:52:23.735000 audit[5006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffec9c4990 a2=0 a3=ffff80215fa8 items=0 ppid=4817 pid=5006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.735000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:52:23.779000 audit[5009]: NETFILTER_CFG table=filter:113 family=2 entries=215 op=nft_register_chain pid=5009 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:52:23.779000 audit[5009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=125772 a0=3 a1=ffffe8dc23f0 a2=0 a3=ffffa094afa8 items=0 ppid=4817 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.779000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:52:23.811000 audit[5007]: NETFILTER_CFG 
table=raw:114 family=2 entries=21 op=nft_register_chain pid=5007 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:52:23.811000 audit[5007]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffef8839b0 a2=0 a3=ffffa6688fa8 items=0 ppid=4817 pid=5007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.811000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:52:23.814000 audit[5005]: NETFILTER_CFG table=nat:115 family=2 entries=15 op=nft_register_chain pid=5005 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:52:23.814000 audit[5005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffceb52110 a2=0 a3=ffffa75b8fa8 items=0 ppid=4817 pid=5005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.814000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:52:23.840000 audit[5015]: NETFILTER_CFG table=filter:116 family=2 entries=10 op=nft_register_rule pid=5015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:23.840000 audit[5015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffe839bdc0 a2=0 a3=1 items=0 ppid=2898 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.840000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:23.845000 audit[5015]: NETFILTER_CFG table=nat:117 family=2 entries=20 op=nft_register_rule pid=5015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:23.845000 audit[5015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe839bdc0 a2=0 a3=1 items=0 ppid=2898 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:23.845000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:23.859113 env[1566]: time="2025-03-17T18:52:23.859064717Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:23.872127 env[1566]: time="2025-03-17T18:52:23.872010270Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:23.878520 env[1566]: time="2025-03-17T18:52:23.878467907Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:52:23.885642 env[1566]: time="2025-03-17T18:52:23.885584411Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:23.886384 env[1566]: time="2025-03-17T18:52:23.886324517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Mar 17 18:52:23.888748 env[1566]: time="2025-03-17T18:52:23.888708231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Mar 17 18:52:23.909997 env[1566]: time="2025-03-17T18:52:23.909627592Z" level=info msg="CreateContainer within sandbox \"0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 17 18:52:23.973236 env[1566]: time="2025-03-17T18:52:23.973176180Z" level=info msg="CreateContainer within sandbox \"0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5b09cf7369d1b40478e8ae3631e59ae69be41e1f9171ab3ce951bbc49e85a076\"" Mar 17 18:52:23.974114 env[1566]: time="2025-03-17T18:52:23.974083682Z" level=info msg="StartContainer for \"5b09cf7369d1b40478e8ae3631e59ae69be41e1f9171ab3ce951bbc49e85a076\"" Mar 17 18:52:24.058141 env[1566]: time="2025-03-17T18:52:24.058087051Z" level=info msg="StartContainer for \"5b09cf7369d1b40478e8ae3631e59ae69be41e1f9171ab3ce951bbc49e85a076\" returns successfully" Mar 17 18:52:24.066058 systemd-networkd[1734]: vxlan.calico: Gained IPv6LL Mar 17 18:52:24.708815 kubelet[2763]: I0317 18:52:24.708754 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b85fdb584-rpw9l" podStartSLOduration=29.56821788 podStartE2EDuration="32.708738561s" podCreationTimestamp="2025-03-17 18:51:52 +0000 UTC" firstStartedPulling="2025-03-17 18:52:18.303761532 +0000 UTC m=+47.251965648" lastFinishedPulling="2025-03-17 18:52:21.444282133 +0000 UTC m=+50.392486329" observedRunningTime="2025-03-17 18:52:22.747629933 +0000 UTC m=+51.695834049" watchObservedRunningTime="2025-03-17 18:52:24.708738561 +0000 UTC m=+53.656942677" Mar 17 18:52:24.863000 audit[5071]: NETFILTER_CFG table=filter:118 family=2 entries=9 op=nft_register_rule pid=5071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:24.870106 kernel: kauditd_printk_skb: 500 callbacks suppressed Mar 17 18:52:24.870155 kernel: audit: type=1325 audit(1742237544.863:421): table=filter:118 family=2 entries=9 op=nft_register_rule pid=5071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:24.863000 audit[5071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=fffff3f86b20 a2=0 a3=1 items=0 ppid=2898 pid=5071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:24.863000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:24.917179 kubelet[2763]: I0317 18:52:24.917017 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-kube-controllers-556dcdff49-tlrdg" podStartSLOduration=25.298823682 podStartE2EDuration="29.916997387s" podCreationTimestamp="2025-03-17 18:51:55 +0000 UTC" firstStartedPulling="2025-03-17 18:52:19.269470867 +0000 UTC m=+48.217674983" lastFinishedPulling="2025-03-17 18:52:23.887644572 +0000 UTC m=+52.835848688" observedRunningTime="2025-03-17 18:52:24.848445962 +0000 UTC m=+53.796650078" watchObservedRunningTime="2025-03-17 18:52:24.916997387 +0000 UTC m=+53.865201503" Mar 17 18:52:24.925891 kernel: audit: type=1300 audit(1742237544.863:421): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=fffff3f86b20 a2=0 a3=1 items=0 ppid=2898 pid=5071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:24.925986 kernel: audit: type=1327 audit(1742237544.863:421): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:24.918000 audit[5071]: NETFILTER_CFG table=nat:119 family=2 entries=27 op=nft_register_chain pid=5071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:24.940351 kernel: audit: type=1325 audit(1742237544.918:422): table=nat:119 family=2 entries=27 op=nft_register_chain pid=5071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:24.918000 audit[5071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=fffff3f86b20 a2=0 a3=1 items=0 ppid=2898 pid=5071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:24.967494 kernel: audit: type=1300 audit(1742237544.918:422): arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=fffff3f86b20 a2=0 a3=1 items=0 ppid=2898 pid=5071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:24.918000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:24.982102 kernel: audit: type=1327 audit(1742237544.918:422): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:25.560636 env[1566]: time="2025-03-17T18:52:25.560583735Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:25.573950 env[1566]: time="2025-03-17T18:52:25.573896286Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:25.580060 env[1566]: time="2025-03-17T18:52:25.580016812Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:25.587080 env[1566]: time="2025-03-17T18:52:25.587025921Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:25.587737 env[1566]: time="2025-03-17T18:52:25.587701108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Mar 17 18:52:25.590210 env[1566]: time="2025-03-17T18:52:25.590174022Z" level=info msg="CreateContainer within sandbox \"fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 18:52:25.646337 env[1566]: time="2025-03-17T18:52:25.646283612Z" level=info msg="CreateContainer within sandbox \"fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a08db27494955a1f6fa71703db1fd1613576eb9c5a0f4e6bde4e5a8e1c1a977a\"" Mar 17 18:52:25.646837 env[1566]: time="2025-03-17T18:52:25.646788523Z" level=info msg="StartContainer for \"a08db27494955a1f6fa71703db1fd1613576eb9c5a0f4e6bde4e5a8e1c1a977a\"" Mar 17 18:52:25.729449 env[1566]: time="2025-03-17T18:52:25.729373858Z" level=info msg="StartContainer for \"a08db27494955a1f6fa71703db1fd1613576eb9c5a0f4e6bde4e5a8e1c1a977a\" returns successfully" Mar 17 18:52:25.731384 env[1566]: time="2025-03-17T18:52:25.731141305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Mar 17 18:52:25.915794 systemd[1]: run-containerd-runc-k8s.io-a08db27494955a1f6fa71703db1fd1613576eb9c5a0f4e6bde4e5a8e1c1a977a-runc.6qJRoz.mount: Deactivated successfully. Mar 17 18:52:27.276751 env[1566]: time="2025-03-17T18:52:27.276710277Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:27.285769 env[1566]: time="2025-03-17T18:52:27.285719512Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:27.291513 env[1566]: time="2025-03-17T18:52:27.291462647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:27.297350 env[1566]: time="2025-03-17T18:52:27.297291460Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:27.297859 env[1566]: time="2025-03-17T18:52:27.297828650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Mar 17 18:52:27.302355 env[1566]: time="2025-03-17T18:52:27.302307448Z" level=info msg="CreateContainer within sandbox \"fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 18:52:27.347474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3449944463.mount: Deactivated successfully. 
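The audit PROCTITLE fields in the records above are hex-encoded, NUL-separated command lines rather than free text. As a minimal sketch (not part of the captured journal, and assuming the standard auditd PROCTITLE encoding), they can be decoded with a few lines of Python:

    # Decode an audit PROCTITLE value (hex of the NUL-separated argv) into a readable command line.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg)

    # The bpftool records above decode to:
    #   bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A
    # and the iptables-nft-re NETFILTER_CFG records to:
    #   iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
    print(decode_proctitle(
        "627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564"
        "002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41"))
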
Mar 17 18:52:27.368929 env[1566]: time="2025-03-17T18:52:27.368855707Z" level=info msg="CreateContainer within sandbox \"fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"663782d09c560356dccbc73fad2d86995eedc51b85a238e4d576b328c1c2f224\"" Mar 17 18:52:27.369833 env[1566]: time="2025-03-17T18:52:27.369802649Z" level=info msg="StartContainer for \"663782d09c560356dccbc73fad2d86995eedc51b85a238e4d576b328c1c2f224\"" Mar 17 18:52:27.452434 env[1566]: time="2025-03-17T18:52:27.452385374Z" level=info msg="StartContainer for \"663782d09c560356dccbc73fad2d86995eedc51b85a238e4d576b328c1c2f224\" returns successfully" Mar 17 18:52:27.705229 kubelet[2763]: I0317 18:52:27.705126 2763 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 18:52:27.705229 kubelet[2763]: I0317 18:52:27.705164 2763 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 18:52:30.349191 systemd[1]: run-containerd-runc-k8s.io-c095753d191fcbffbc8b75aece1c7488e9f3213681409fbc4369c81a4d9ec146-runc.fQffXW.mount: Deactivated successfully. Mar 17 18:52:30.423748 kubelet[2763]: I0317 18:52:30.423688 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-g7tkt" podStartSLOduration=30.967153272 podStartE2EDuration="36.423671726s" podCreationTimestamp="2025-03-17 18:51:54 +0000 UTC" firstStartedPulling="2025-03-17 18:52:21.842552933 +0000 UTC m=+50.790757049" lastFinishedPulling="2025-03-17 18:52:27.299071387 +0000 UTC m=+56.247275503" observedRunningTime="2025-03-17 18:52:27.864899765 +0000 UTC m=+56.813103881" watchObservedRunningTime="2025-03-17 18:52:30.423671726 +0000 UTC m=+59.371875802" Mar 17 18:52:31.375657 env[1566]: time="2025-03-17T18:52:31.375615865Z" level=info msg="StopPodSandbox for \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\"" Mar 17 18:52:31.444992 env[1566]: 2025-03-17 18:52:31.413 [WARNING][5183] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c8147012-69db-48aa-96fa-b27553fe56b8", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952", Pod:"coredns-7db6d8ff4d-wlmhz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid25cc6a53dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:31.444992 env[1566]: 2025-03-17 18:52:31.414 [INFO][5183] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Mar 17 18:52:31.444992 env[1566]: 2025-03-17 18:52:31.414 [INFO][5183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" iface="eth0" netns="" Mar 17 18:52:31.444992 env[1566]: 2025-03-17 18:52:31.414 [INFO][5183] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Mar 17 18:52:31.444992 env[1566]: 2025-03-17 18:52:31.414 [INFO][5183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Mar 17 18:52:31.444992 env[1566]: 2025-03-17 18:52:31.432 [INFO][5190] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" HandleID="k8s-pod-network.e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:31.444992 env[1566]: 2025-03-17 18:52:31.432 [INFO][5190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:31.444992 env[1566]: 2025-03-17 18:52:31.432 [INFO][5190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:31.444992 env[1566]: 2025-03-17 18:52:31.441 [WARNING][5190] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" HandleID="k8s-pod-network.e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:31.444992 env[1566]: 2025-03-17 18:52:31.441 [INFO][5190] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" HandleID="k8s-pod-network.e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:31.444992 env[1566]: 2025-03-17 18:52:31.442 [INFO][5190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:31.444992 env[1566]: 2025-03-17 18:52:31.443 [INFO][5183] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Mar 17 18:52:31.445571 env[1566]: time="2025-03-17T18:52:31.445526107Z" level=info msg="TearDown network for sandbox \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\" successfully" Mar 17 18:52:31.445648 env[1566]: time="2025-03-17T18:52:31.445632585Z" level=info msg="StopPodSandbox for \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\" returns successfully" Mar 17 18:52:31.446288 env[1566]: time="2025-03-17T18:52:31.446258014Z" level=info msg="RemovePodSandbox for \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\"" Mar 17 18:52:31.446379 env[1566]: time="2025-03-17T18:52:31.446301214Z" level=info msg="Forcibly stopping sandbox \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\"" Mar 17 18:52:31.516566 env[1566]: 2025-03-17 18:52:31.484 [WARNING][5208] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c8147012-69db-48aa-96fa-b27553fe56b8", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"2fdd34f9093c7595045e99a62731d64cc116d576b43a0acac58cc17828880952", Pod:"coredns-7db6d8ff4d-wlmhz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid25cc6a53dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:31.516566 env[1566]: 2025-03-17 18:52:31.484 [INFO][5208] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Mar 17 18:52:31.516566 env[1566]: 2025-03-17 18:52:31.484 [INFO][5208] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" iface="eth0" netns="" Mar 17 18:52:31.516566 env[1566]: 2025-03-17 18:52:31.484 [INFO][5208] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Mar 17 18:52:31.516566 env[1566]: 2025-03-17 18:52:31.484 [INFO][5208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Mar 17 18:52:31.516566 env[1566]: 2025-03-17 18:52:31.502 [INFO][5215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" HandleID="k8s-pod-network.e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:31.516566 env[1566]: 2025-03-17 18:52:31.502 [INFO][5215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:31.516566 env[1566]: 2025-03-17 18:52:31.503 [INFO][5215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:31.516566 env[1566]: 2025-03-17 18:52:31.511 [WARNING][5215] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" HandleID="k8s-pod-network.e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:31.516566 env[1566]: 2025-03-17 18:52:31.511 [INFO][5215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" HandleID="k8s-pod-network.e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--wlmhz-eth0" Mar 17 18:52:31.516566 env[1566]: 2025-03-17 18:52:31.513 [INFO][5215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:31.516566 env[1566]: 2025-03-17 18:52:31.515 [INFO][5208] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c" Mar 17 18:52:31.517064 env[1566]: time="2025-03-17T18:52:31.516610409Z" level=info msg="TearDown network for sandbox \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\" successfully" Mar 17 18:52:31.530000 env[1566]: time="2025-03-17T18:52:31.529945533Z" level=info msg="RemovePodSandbox \"e0a1c8f1e37e1b7623414e7048fba3842698a32a380236544763d85e6b3a6e6c\" returns successfully" Mar 17 18:52:31.530756 env[1566]: time="2025-03-17T18:52:31.530714999Z" level=info msg="StopPodSandbox for \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\"" Mar 17 18:52:31.610703 env[1566]: 2025-03-17 18:52:31.575 [WARNING][5234] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b729951e-7fde-40b6-a0f1-675d7c66febf", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b", Pod:"csi-node-driver-g7tkt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee17ef2878d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:31.610703 env[1566]: 2025-03-17 18:52:31.575 [INFO][5234] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Mar 17 18:52:31.610703 
env[1566]: 2025-03-17 18:52:31.575 [INFO][5234] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" iface="eth0" netns="" Mar 17 18:52:31.610703 env[1566]: 2025-03-17 18:52:31.575 [INFO][5234] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Mar 17 18:52:31.610703 env[1566]: 2025-03-17 18:52:31.576 [INFO][5234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Mar 17 18:52:31.610703 env[1566]: 2025-03-17 18:52:31.594 [INFO][5240] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" HandleID="k8s-pod-network.c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:31.610703 env[1566]: 2025-03-17 18:52:31.595 [INFO][5240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:31.610703 env[1566]: 2025-03-17 18:52:31.595 [INFO][5240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:31.610703 env[1566]: 2025-03-17 18:52:31.607 [WARNING][5240] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" HandleID="k8s-pod-network.c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:31.610703 env[1566]: 2025-03-17 18:52:31.607 [INFO][5240] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" HandleID="k8s-pod-network.c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:31.610703 env[1566]: 2025-03-17 18:52:31.608 [INFO][5240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:31.610703 env[1566]: 2025-03-17 18:52:31.609 [INFO][5234] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Mar 17 18:52:31.611259 env[1566]: time="2025-03-17T18:52:31.610742622Z" level=info msg="TearDown network for sandbox \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\" successfully" Mar 17 18:52:31.611259 env[1566]: time="2025-03-17T18:52:31.610773342Z" level=info msg="StopPodSandbox for \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\" returns successfully" Mar 17 18:52:31.611311 env[1566]: time="2025-03-17T18:52:31.611260693Z" level=info msg="RemovePodSandbox for \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\"" Mar 17 18:52:31.611335 env[1566]: time="2025-03-17T18:52:31.611290813Z" level=info msg="Forcibly stopping sandbox \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\"" Mar 17 18:52:31.711556 env[1566]: 2025-03-17 18:52:31.653 [WARNING][5258] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b729951e-7fde-40b6-a0f1-675d7c66febf", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"fc7a5eefd0f968ed45c4c6df476448e7be684518fe66501621ac0457db0be61b", Pod:"csi-node-driver-g7tkt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee17ef2878d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:31.711556 env[1566]: 2025-03-17 18:52:31.653 [INFO][5258] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Mar 17 18:52:31.711556 env[1566]: 2025-03-17 18:52:31.653 [INFO][5258] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" iface="eth0" netns="" Mar 17 18:52:31.711556 env[1566]: 2025-03-17 18:52:31.653 [INFO][5258] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Mar 17 18:52:31.711556 env[1566]: 2025-03-17 18:52:31.653 [INFO][5258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Mar 17 18:52:31.711556 env[1566]: 2025-03-17 18:52:31.694 [INFO][5264] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" HandleID="k8s-pod-network.c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:31.711556 env[1566]: 2025-03-17 18:52:31.695 [INFO][5264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:31.711556 env[1566]: 2025-03-17 18:52:31.695 [INFO][5264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:31.711556 env[1566]: 2025-03-17 18:52:31.707 [WARNING][5264] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" HandleID="k8s-pod-network.c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:31.711556 env[1566]: 2025-03-17 18:52:31.707 [INFO][5264] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" HandleID="k8s-pod-network.c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-csi--node--driver--g7tkt-eth0" Mar 17 18:52:31.711556 env[1566]: 2025-03-17 18:52:31.709 [INFO][5264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:31.711556 env[1566]: 2025-03-17 18:52:31.710 [INFO][5258] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa" Mar 17 18:52:31.712145 env[1566]: time="2025-03-17T18:52:31.712100868Z" level=info msg="TearDown network for sandbox \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\" successfully" Mar 17 18:52:31.722121 env[1566]: time="2025-03-17T18:52:31.722064572Z" level=info msg="RemovePodSandbox \"c952186de7390f865d8d07eb798aac8ce4b67164b86f2f0bc0e8c4d4b23251fa\" returns successfully" Mar 17 18:52:31.722819 env[1566]: time="2025-03-17T18:52:31.722796839Z" level=info msg="StopPodSandbox for \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\"" Mar 17 18:52:31.791585 env[1566]: 2025-03-17 18:52:31.759 [WARNING][5282] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0", GenerateName:"calico-kube-controllers-556dcdff49-", Namespace:"calico-system", SelfLink:"", UID:"65d68753-4b96-44f6-80b9-42dfef957a45", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"556dcdff49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e", Pod:"calico-kube-controllers-556dcdff49-tlrdg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4651768243", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:31.791585 env[1566]: 2025-03-17 18:52:31.759 [INFO][5282] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Mar 17 18:52:31.791585 env[1566]: 2025-03-17 18:52:31.759 [INFO][5282] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" iface="eth0" netns="" Mar 17 18:52:31.791585 env[1566]: 2025-03-17 18:52:31.759 [INFO][5282] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Mar 17 18:52:31.791585 env[1566]: 2025-03-17 18:52:31.759 [INFO][5282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Mar 17 18:52:31.791585 env[1566]: 2025-03-17 18:52:31.778 [INFO][5289] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" HandleID="k8s-pod-network.b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:31.791585 env[1566]: 2025-03-17 18:52:31.778 [INFO][5289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:31.791585 env[1566]: 2025-03-17 18:52:31.778 [INFO][5289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:31.791585 env[1566]: 2025-03-17 18:52:31.787 [WARNING][5289] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" HandleID="k8s-pod-network.b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:31.791585 env[1566]: 2025-03-17 18:52:31.787 [INFO][5289] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" HandleID="k8s-pod-network.b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:31.791585 env[1566]: 2025-03-17 18:52:31.789 [INFO][5289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:31.791585 env[1566]: 2025-03-17 18:52:31.790 [INFO][5282] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Mar 17 18:52:31.792129 env[1566]: time="2025-03-17T18:52:31.792095732Z" level=info msg="TearDown network for sandbox \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\" successfully" Mar 17 18:52:31.792199 env[1566]: time="2025-03-17T18:52:31.792183410Z" level=info msg="StopPodSandbox for \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\" returns successfully" Mar 17 18:52:31.792765 env[1566]: time="2025-03-17T18:52:31.792741840Z" level=info msg="RemovePodSandbox for \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\"" Mar 17 18:52:31.793028 env[1566]: time="2025-03-17T18:52:31.792987276Z" level=info msg="Forcibly stopping sandbox \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\"" Mar 17 18:52:31.884082 env[1566]: 2025-03-17 18:52:31.832 [WARNING][5308] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0", GenerateName:"calico-kube-controllers-556dcdff49-", Namespace:"calico-system", SelfLink:"", UID:"65d68753-4b96-44f6-80b9-42dfef957a45", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"556dcdff49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"0c026311f0bb1687634f08a563b5b0b259ec01a653e44641369fd636b402e40e", Pod:"calico-kube-controllers-556dcdff49-tlrdg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4651768243", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:31.884082 env[1566]: 2025-03-17 18:52:31.832 [INFO][5308] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Mar 17 18:52:31.884082 env[1566]: 2025-03-17 18:52:31.832 [INFO][5308] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" iface="eth0" netns="" Mar 17 18:52:31.884082 env[1566]: 2025-03-17 18:52:31.832 [INFO][5308] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Mar 17 18:52:31.884082 env[1566]: 2025-03-17 18:52:31.832 [INFO][5308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Mar 17 18:52:31.884082 env[1566]: 2025-03-17 18:52:31.871 [INFO][5314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" HandleID="k8s-pod-network.b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:31.884082 env[1566]: 2025-03-17 18:52:31.871 [INFO][5314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:31.884082 env[1566]: 2025-03-17 18:52:31.871 [INFO][5314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:31.884082 env[1566]: 2025-03-17 18:52:31.879 [WARNING][5314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" HandleID="k8s-pod-network.b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:31.884082 env[1566]: 2025-03-17 18:52:31.879 [INFO][5314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" HandleID="k8s-pod-network.b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--kube--controllers--556dcdff49--tlrdg-eth0" Mar 17 18:52:31.884082 env[1566]: 2025-03-17 18:52:31.881 [INFO][5314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:31.884082 env[1566]: 2025-03-17 18:52:31.882 [INFO][5308] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a" Mar 17 18:52:31.884590 env[1566]: time="2025-03-17T18:52:31.884555615Z" level=info msg="TearDown network for sandbox \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\" successfully" Mar 17 18:52:31.895320 env[1566]: time="2025-03-17T18:52:31.895258425Z" level=info msg="RemovePodSandbox \"b96d7dd36c70dfdd23967d114fa64e7431ab0f79cc1035c1d755e3d694374f0a\" returns successfully" Mar 17 18:52:31.896016 env[1566]: time="2025-03-17T18:52:31.895961733Z" level=info msg="StopPodSandbox for \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\"" Mar 17 18:52:31.974910 env[1566]: 2025-03-17 18:52:31.940 [WARNING][5333] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0", GenerateName:"calico-apiserver-7b85fdb584-", Namespace:"calico-apiserver", SelfLink:"", UID:"3da68d2e-dbd2-4641-b6b6-c1e92b514d10", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b85fdb584", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d", Pod:"calico-apiserver-7b85fdb584-4542j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6a2862f721", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:31.974910 env[1566]: 2025-03-17 18:52:31.940 [INFO][5333] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Mar 17 18:52:31.974910 env[1566]: 2025-03-17 18:52:31.940 [INFO][5333] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" iface="eth0" netns="" Mar 17 18:52:31.974910 env[1566]: 2025-03-17 18:52:31.940 [INFO][5333] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Mar 17 18:52:31.974910 env[1566]: 2025-03-17 18:52:31.940 [INFO][5333] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Mar 17 18:52:31.974910 env[1566]: 2025-03-17 18:52:31.960 [INFO][5340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" HandleID="k8s-pod-network.faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:31.974910 env[1566]: 2025-03-17 18:52:31.960 [INFO][5340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:31.974910 env[1566]: 2025-03-17 18:52:31.960 [INFO][5340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:31.974910 env[1566]: 2025-03-17 18:52:31.970 [WARNING][5340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" HandleID="k8s-pod-network.faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:31.974910 env[1566]: 2025-03-17 18:52:31.970 [INFO][5340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" HandleID="k8s-pod-network.faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:31.974910 env[1566]: 2025-03-17 18:52:31.971 [INFO][5340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:31.974910 env[1566]: 2025-03-17 18:52:31.973 [INFO][5333] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Mar 17 18:52:31.975428 env[1566]: time="2025-03-17T18:52:31.975395007Z" level=info msg="TearDown network for sandbox \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\" successfully" Mar 17 18:52:31.975495 env[1566]: time="2025-03-17T18:52:31.975479925Z" level=info msg="StopPodSandbox for \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\" returns successfully" Mar 17 18:52:31.976145 env[1566]: time="2025-03-17T18:52:31.976110074Z" level=info msg="RemovePodSandbox for \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\"" Mar 17 18:52:31.976314 env[1566]: time="2025-03-17T18:52:31.976273671Z" level=info msg="Forcibly stopping sandbox \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\"" Mar 17 18:52:32.046488 env[1566]: 2025-03-17 18:52:32.013 [WARNING][5359] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0", GenerateName:"calico-apiserver-7b85fdb584-", Namespace:"calico-apiserver", SelfLink:"", UID:"3da68d2e-dbd2-4641-b6b6-c1e92b514d10", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b85fdb584", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"646de5560a67ba278bca88b6c2c2fbda33965bab849cdbe6fea518258d9ee71d", Pod:"calico-apiserver-7b85fdb584-4542j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6a2862f721", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:32.046488 env[1566]: 2025-03-17 18:52:32.013 [INFO][5359] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Mar 17 18:52:32.046488 env[1566]: 2025-03-17 18:52:32.013 [INFO][5359] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" iface="eth0" netns="" Mar 17 18:52:32.046488 env[1566]: 2025-03-17 18:52:32.013 [INFO][5359] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Mar 17 18:52:32.046488 env[1566]: 2025-03-17 18:52:32.013 [INFO][5359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Mar 17 18:52:32.046488 env[1566]: 2025-03-17 18:52:32.033 [INFO][5365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" HandleID="k8s-pod-network.faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:32.046488 env[1566]: 2025-03-17 18:52:32.033 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:32.046488 env[1566]: 2025-03-17 18:52:32.033 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:32.046488 env[1566]: 2025-03-17 18:52:32.042 [WARNING][5365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" HandleID="k8s-pod-network.faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:32.046488 env[1566]: 2025-03-17 18:52:32.042 [INFO][5365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" HandleID="k8s-pod-network.faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--4542j-eth0" Mar 17 18:52:32.046488 env[1566]: 2025-03-17 18:52:32.043 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:32.046488 env[1566]: 2025-03-17 18:52:32.045 [INFO][5359] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68" Mar 17 18:52:32.047030 env[1566]: time="2025-03-17T18:52:32.046995466Z" level=info msg="TearDown network for sandbox \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\" successfully" Mar 17 18:52:32.058306 env[1566]: time="2025-03-17T18:52:32.058264428Z" level=info msg="RemovePodSandbox \"faca32c2f27d05cb4b0fa73fb116cda5009efcfdafd08b20b4d6363289d31b68\" returns successfully" Mar 17 18:52:32.058980 env[1566]: time="2025-03-17T18:52:32.058956736Z" level=info msg="StopPodSandbox for \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\"" Mar 17 18:52:32.140227 env[1566]: 2025-03-17 18:52:32.102 [WARNING][5384] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0", GenerateName:"calico-apiserver-7b85fdb584-", Namespace:"calico-apiserver", SelfLink:"", UID:"7edfd8b5-e741-4ba4-b5a0-38f1929fe58b", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b85fdb584", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934", Pod:"calico-apiserver-7b85fdb584-rpw9l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic078a3256b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:32.140227 env[1566]: 2025-03-17 18:52:32.102 [INFO][5384] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Mar 17 18:52:32.140227 env[1566]: 2025-03-17 18:52:32.102 [INFO][5384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" iface="eth0" netns="" Mar 17 18:52:32.140227 env[1566]: 2025-03-17 18:52:32.102 [INFO][5384] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Mar 17 18:52:32.140227 env[1566]: 2025-03-17 18:52:32.102 [INFO][5384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Mar 17 18:52:32.140227 env[1566]: 2025-03-17 18:52:32.123 [INFO][5391] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" HandleID="k8s-pod-network.33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:32.140227 env[1566]: 2025-03-17 18:52:32.123 [INFO][5391] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:32.140227 env[1566]: 2025-03-17 18:52:32.123 [INFO][5391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:32.140227 env[1566]: 2025-03-17 18:52:32.131 [WARNING][5391] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" HandleID="k8s-pod-network.33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:32.140227 env[1566]: 2025-03-17 18:52:32.131 [INFO][5391] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" HandleID="k8s-pod-network.33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:32.140227 env[1566]: 2025-03-17 18:52:32.137 [INFO][5391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:32.140227 env[1566]: 2025-03-17 18:52:32.138 [INFO][5384] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Mar 17 18:52:32.140739 env[1566]: time="2025-03-17T18:52:32.140703941Z" level=info msg="TearDown network for sandbox \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\" successfully" Mar 17 18:52:32.140808 env[1566]: time="2025-03-17T18:52:32.140793260Z" level=info msg="StopPodSandbox for \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\" returns successfully" Mar 17 18:52:32.141519 env[1566]: time="2025-03-17T18:52:32.141490527Z" level=info msg="RemovePodSandbox for \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\"" Mar 17 18:52:32.141610 env[1566]: time="2025-03-17T18:52:32.141527087Z" level=info msg="Forcibly stopping sandbox \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\"" Mar 17 18:52:32.211982 env[1566]: 2025-03-17 18:52:32.177 [WARNING][5409] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0", GenerateName:"calico-apiserver-7b85fdb584-", Namespace:"calico-apiserver", SelfLink:"", UID:"7edfd8b5-e741-4ba4-b5a0-38f1929fe58b", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b85fdb584", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"58da4f6e5fc27c8cee99933b7e9aaf356eed0af0485adfed2473be01b66d9934", Pod:"calico-apiserver-7b85fdb584-rpw9l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic078a3256b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:32.211982 env[1566]: 2025-03-17 18:52:32.177 [INFO][5409] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Mar 17 18:52:32.211982 env[1566]: 2025-03-17 18:52:32.177 [INFO][5409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" iface="eth0" netns="" Mar 17 18:52:32.211982 env[1566]: 2025-03-17 18:52:32.177 [INFO][5409] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Mar 17 18:52:32.211982 env[1566]: 2025-03-17 18:52:32.177 [INFO][5409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Mar 17 18:52:32.211982 env[1566]: 2025-03-17 18:52:32.197 [INFO][5416] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" HandleID="k8s-pod-network.33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:32.211982 env[1566]: 2025-03-17 18:52:32.197 [INFO][5416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:32.211982 env[1566]: 2025-03-17 18:52:32.197 [INFO][5416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:32.211982 env[1566]: 2025-03-17 18:52:32.207 [WARNING][5416] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" HandleID="k8s-pod-network.33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:32.211982 env[1566]: 2025-03-17 18:52:32.207 [INFO][5416] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" HandleID="k8s-pod-network.33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-calico--apiserver--7b85fdb584--rpw9l-eth0" Mar 17 18:52:32.211982 env[1566]: 2025-03-17 18:52:32.209 [INFO][5416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:32.211982 env[1566]: 2025-03-17 18:52:32.210 [INFO][5409] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a" Mar 17 18:52:32.212415 env[1566]: time="2025-03-17T18:52:32.212012089Z" level=info msg="TearDown network for sandbox \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\" successfully" Mar 17 18:52:32.227472 env[1566]: time="2025-03-17T18:52:32.227365740Z" level=info msg="RemovePodSandbox \"33e15b3c35dd3294ddfb7b5348fe3b3ff1444839c738b7c49c8236f0cae39e6a\" returns successfully" Mar 17 18:52:32.227989 env[1566]: time="2025-03-17T18:52:32.227966489Z" level=info msg="StopPodSandbox for \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\"" Mar 17 18:52:32.296074 env[1566]: 2025-03-17 18:52:32.263 [WARNING][5434] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3418e3de-8e64-4cf5-99b4-25c5564ac718", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633", Pod:"coredns-7db6d8ff4d-d5v52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36c11c6a3a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:32.296074 env[1566]: 2025-03-17 18:52:32.264 [INFO][5434] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Mar 17 18:52:32.296074 env[1566]: 2025-03-17 18:52:32.264 [INFO][5434] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" iface="eth0" netns="" Mar 17 18:52:32.296074 env[1566]: 2025-03-17 18:52:32.264 [INFO][5434] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Mar 17 18:52:32.296074 env[1566]: 2025-03-17 18:52:32.264 [INFO][5434] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Mar 17 18:52:32.296074 env[1566]: 2025-03-17 18:52:32.281 [INFO][5440] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" HandleID="k8s-pod-network.e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:32.296074 env[1566]: 2025-03-17 18:52:32.282 [INFO][5440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:32.296074 env[1566]: 2025-03-17 18:52:32.282 [INFO][5440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:32.296074 env[1566]: 2025-03-17 18:52:32.291 [WARNING][5440] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" HandleID="k8s-pod-network.e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:32.296074 env[1566]: 2025-03-17 18:52:32.291 [INFO][5440] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" HandleID="k8s-pod-network.e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:32.296074 env[1566]: 2025-03-17 18:52:32.293 [INFO][5440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:32.296074 env[1566]: 2025-03-17 18:52:32.294 [INFO][5434] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Mar 17 18:52:32.296517 env[1566]: time="2025-03-17T18:52:32.296106493Z" level=info msg="TearDown network for sandbox \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\" successfully" Mar 17 18:52:32.296517 env[1566]: time="2025-03-17T18:52:32.296137733Z" level=info msg="StopPodSandbox for \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\" returns successfully" Mar 17 18:52:32.296745 env[1566]: time="2025-03-17T18:52:32.296720082Z" level=info msg="RemovePodSandbox for \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\"" Mar 17 18:52:32.297003 env[1566]: time="2025-03-17T18:52:32.296919079Z" level=info msg="Forcibly stopping sandbox \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\"" Mar 17 18:52:32.365196 env[1566]: 2025-03-17 18:52:32.332 [WARNING][5460] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3418e3de-8e64-4cf5-99b4-25c5564ac718", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-a-c36c8d7be6", ContainerID:"e6f80ddc0cb0a442df8f9ae6d125267853e028506b5c30e2c1bf70381589f633", Pod:"coredns-7db6d8ff4d-d5v52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36c11c6a3a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:52:32.365196 env[1566]: 2025-03-17 18:52:32.333 [INFO][5460] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Mar 17 18:52:32.365196 env[1566]: 2025-03-17 18:52:32.333 [INFO][5460] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" iface="eth0" netns="" Mar 17 18:52:32.365196 env[1566]: 2025-03-17 18:52:32.333 [INFO][5460] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Mar 17 18:52:32.365196 env[1566]: 2025-03-17 18:52:32.333 [INFO][5460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Mar 17 18:52:32.365196 env[1566]: 2025-03-17 18:52:32.352 [INFO][5466] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" HandleID="k8s-pod-network.e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:32.365196 env[1566]: 2025-03-17 18:52:32.352 [INFO][5466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:52:32.365196 env[1566]: 2025-03-17 18:52:32.352 [INFO][5466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:52:32.365196 env[1566]: 2025-03-17 18:52:32.360 [WARNING][5466] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" HandleID="k8s-pod-network.e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:32.365196 env[1566]: 2025-03-17 18:52:32.361 [INFO][5466] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" HandleID="k8s-pod-network.e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Workload="ci--3510.3.7--a--c36c8d7be6-k8s-coredns--7db6d8ff4d--d5v52-eth0" Mar 17 18:52:32.365196 env[1566]: 2025-03-17 18:52:32.362 [INFO][5466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:52:32.365196 env[1566]: 2025-03-17 18:52:32.363 [INFO][5460] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334" Mar 17 18:52:32.365647 env[1566]: time="2025-03-17T18:52:32.365264239Z" level=info msg="TearDown network for sandbox \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\" successfully" Mar 17 18:52:32.377359 env[1566]: time="2025-03-17T18:52:32.377311228Z" level=info msg="RemovePodSandbox \"e065047c826c98b68e6041689fa20f77b100c59a650cc57df87ad96e87c51334\" returns successfully" Mar 17 18:52:35.248799 systemd[1]: run-containerd-runc-k8s.io-5b09cf7369d1b40478e8ae3631e59ae69be41e1f9171ab3ce951bbc49e85a076-runc.kjJuzJ.mount: Deactivated successfully. 
Mar 17 18:52:53.592579 kubelet[2763]: I0317 18:52:53.592518 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:52:53.633000 audit[5511]: NETFILTER_CFG table=filter:120 family=2 entries=8 op=nft_register_rule pid=5511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:53.633000 audit[5511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffc1aca190 a2=0 a3=1 items=0 ppid=2898 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:53.676835 kernel: audit: type=1325 audit(1742237573.633:423): table=filter:120 family=2 entries=8 op=nft_register_rule pid=5511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:53.677064 kernel: audit: type=1300 audit(1742237573.633:423): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffc1aca190 a2=0 a3=1 items=0 ppid=2898 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:53.633000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:53.690603 kernel: audit: type=1327 audit(1742237573.633:423): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:53.676000 audit[5511]: NETFILTER_CFG table=nat:121 family=2 entries=34 op=nft_register_chain pid=5511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:53.704564 kernel: audit: type=1325 audit(1742237573.676:424): table=nat:121 family=2 entries=34 op=nft_register_chain pid=5511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:52:53.676000 audit[5511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11236 a0=3 a1=ffffc1aca190 a2=0 a3=1 items=0 ppid=2898 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:53.731096 kernel: audit: type=1300 audit(1742237573.676:424): arch=c00000b7 syscall=211 success=yes exit=11236 a0=3 a1=ffffc1aca190 a2=0 a3=1 items=0 ppid=2898 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:52:53.731204 kernel: audit: type=1327 audit(1742237573.676:424): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:52:53.676000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:53:00.348176 systemd[1]: run-containerd-runc-k8s.io-c095753d191fcbffbc8b75aece1c7488e9f3213681409fbc4369c81a4d9ec146-runc.iZtaBm.mount: Deactivated successfully. Mar 17 18:53:14.924392 systemd[1]: run-containerd-runc-k8s.io-5b09cf7369d1b40478e8ae3631e59ae69be41e1f9171ab3ce951bbc49e85a076-runc.CYzg4V.mount: Deactivated successfully. 
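In the audit records above, the PROCTITLE field carries the process title as hex-encoded bytes, with NUL bytes separating arguments when the title is a full argv. The value logged for pid 5511 decodes to "iptables-restore -w 5 -W 100000 --noflush --counters", and the "737368643A20636F7265205B707269765D" values attached to the later sshd records decode to "sshd: core [priv]". A minimal decoding sketch (the file and function names are made up; it assumes nothing beyond the hex strings in these records):

    # decode_proctitle.py -- audit PROCTITLE values are the process title bytes,
    # hex-encoded; NUL bytes separate arguments when the title is a full argv.
    def decode_proctitle(hexstr: str) -> str:
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode("utf-8", "replace")

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))  # iptables-restore -w 5 -W 100000 --noflush --counters
    print(decode_proctitle("737368643A20636F7265205B707269765D"))  # sshd: core [priv]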
Mar 17 18:53:30.346831 systemd[1]: run-containerd-runc-k8s.io-c095753d191fcbffbc8b75aece1c7488e9f3213681409fbc4369c81a4d9ec146-runc.YUrdrJ.mount: Deactivated successfully. Mar 17 18:53:35.247710 systemd[1]: run-containerd-runc-k8s.io-5b09cf7369d1b40478e8ae3631e59ae69be41e1f9171ab3ce951bbc49e85a076-runc.OVmY5A.mount: Deactivated successfully. Mar 17 18:54:00.350186 systemd[1]: run-containerd-runc-k8s.io-c095753d191fcbffbc8b75aece1c7488e9f3213681409fbc4369c81a4d9ec146-runc.lKirKF.mount: Deactivated successfully. Mar 17 18:54:05.249527 systemd[1]: run-containerd-runc-k8s.io-5b09cf7369d1b40478e8ae3631e59ae69be41e1f9171ab3ce951bbc49e85a076-runc.VS2Iob.mount: Deactivated successfully. Mar 17 18:54:14.925326 systemd[1]: run-containerd-runc-k8s.io-5b09cf7369d1b40478e8ae3631e59ae69be41e1f9171ab3ce951bbc49e85a076-runc.UwTw97.mount: Deactivated successfully. Mar 17 18:54:30.352571 systemd[1]: run-containerd-runc-k8s.io-c095753d191fcbffbc8b75aece1c7488e9f3213681409fbc4369c81a4d9ec146-runc.nH5ShG.mount: Deactivated successfully. Mar 17 18:54:35.246596 systemd[1]: run-containerd-runc-k8s.io-5b09cf7369d1b40478e8ae3631e59ae69be41e1f9171ab3ce951bbc49e85a076-runc.baSkHq.mount: Deactivated successfully. Mar 17 18:55:00.353446 systemd[1]: run-containerd-runc-k8s.io-c095753d191fcbffbc8b75aece1c7488e9f3213681409fbc4369c81a4d9ec146-runc.ZfbIT7.mount: Deactivated successfully. Mar 17 18:55:05.247722 systemd[1]: run-containerd-runc-k8s.io-5b09cf7369d1b40478e8ae3631e59ae69be41e1f9171ab3ce951bbc49e85a076-runc.JJQYGo.mount: Deactivated successfully. Mar 17 18:55:11.522273 systemd[1]: Started sshd@7-10.200.20.12:22-10.200.16.10:49566.service. Mar 17 18:55:11.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.12:22-10.200.16.10:49566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:11.558375 kernel: audit: type=1130 audit(1742237711.522:425): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.12:22-10.200.16.10:49566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:55:11.981165 sshd[5826]: Accepted publickey for core from 10.200.16.10 port 49566 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:55:11.980000 audit[5826]: USER_ACCT pid=5826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:11.983351 sshd[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:11.980000 audit[5826]: CRED_ACQ pid=5826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:12.030761 kernel: audit: type=1101 audit(1742237711.980:426): pid=5826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:12.030885 kernel: audit: type=1103 audit(1742237711.980:427): pid=5826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:12.046581 kernel: audit: type=1006 audit(1742237711.980:428): pid=5826 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Mar 17 18:55:12.051363 systemd-logind[1549]: New session 10 of user core. Mar 17 18:55:12.052054 systemd[1]: Started session-10.scope. 
Mar 17 18:55:11.980000 audit[5826]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc4bff00 a2=3 a3=1 items=0 ppid=1 pid=5826 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:12.087015 kernel: audit: type=1300 audit(1742237711.980:428): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc4bff00 a2=3 a3=1 items=0 ppid=1 pid=5826 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:12.087144 kernel: audit: type=1327 audit(1742237711.980:428): proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:11.980000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:12.091000 audit[5826]: USER_START pid=5826 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:12.121535 kernel: audit: type=1105 audit(1742237712.091:429): pid=5826 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:12.121746 kernel: audit: type=1103 audit(1742237712.093:430): pid=5829 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:12.093000 audit[5829]: CRED_ACQ pid=5829 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:12.468146 sshd[5826]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:12.468000 audit[5826]: USER_END pid=5826 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:12.497698 systemd[1]: sshd@7-10.200.20.12:22-10.200.16.10:49566.service: Deactivated successfully. Mar 17 18:55:12.499065 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 18:55:12.499302 systemd-logind[1549]: Session 10 logged out. Waiting for processes to exit. 
Mar 17 18:55:12.468000 audit[5826]: CRED_DISP pid=5826 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:12.522692 kernel: audit: type=1106 audit(1742237712.468:431): pid=5826 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:12.522834 kernel: audit: type=1104 audit(1742237712.468:432): pid=5826 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:12.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.12:22-10.200.16.10:49566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:12.524697 systemd-logind[1549]: Removed session 10. Mar 17 18:55:14.921460 systemd[1]: run-containerd-runc-k8s.io-5b09cf7369d1b40478e8ae3631e59ae69be41e1f9171ab3ce951bbc49e85a076-runc.bfPOHG.mount: Deactivated successfully. Mar 17 18:55:17.543831 systemd[1]: Started sshd@8-10.200.20.12:22-10.200.16.10:49568.service. Mar 17 18:55:17.572239 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:55:17.572274 kernel: audit: type=1130 audit(1742237717.543:434): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.12:22-10.200.16.10:49568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:17.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.12:22-10.200.16.10:49568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:55:17.975000 audit[5861]: USER_ACCT pid=5861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:17.976570 sshd[5861]: Accepted publickey for core from 10.200.16.10 port 49568 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:55:18.000897 kernel: audit: type=1101 audit(1742237717.975:435): pid=5861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:18.000000 audit[5861]: CRED_ACQ pid=5861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:18.001558 sshd[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:18.040090 kernel: audit: type=1103 audit(1742237718.000:436): pid=5861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:18.040230 kernel: audit: type=1006 audit(1742237718.000:437): pid=5861 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Mar 17 18:55:18.000000 audit[5861]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4daa770 a2=3 a3=1 items=0 ppid=1 pid=5861 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:18.066702 kernel: audit: type=1300 audit(1742237718.000:437): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4daa770 a2=3 a3=1 items=0 ppid=1 pid=5861 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:18.069072 kernel: audit: type=1327 audit(1742237718.000:437): proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:18.000000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:18.070417 systemd[1]: Started session-11.scope. Mar 17 18:55:18.071107 systemd-logind[1549]: New session 11 of user core. 
Mar 17 18:55:18.076000 audit[5861]: USER_START pid=5861 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:18.076000 audit[5864]: CRED_ACQ pid=5864 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:18.108047 kernel: audit: type=1105 audit(1742237718.076:438): pid=5861 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:18.130920 kernel: audit: type=1103 audit(1742237718.076:439): pid=5864 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:18.414133 sshd[5861]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:18.414000 audit[5861]: USER_END pid=5861 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:18.419090 systemd[1]: sshd@8-10.200.20.12:22-10.200.16.10:49568.service: Deactivated successfully. Mar 17 18:55:18.419989 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 18:55:18.443686 systemd-logind[1549]: Session 11 logged out. Waiting for processes to exit. Mar 17 18:55:18.444736 systemd-logind[1549]: Removed session 11. Mar 17 18:55:18.416000 audit[5861]: CRED_DISP pid=5861 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:18.468890 kernel: audit: type=1106 audit(1742237718.414:440): pid=5861 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:18.469004 kernel: audit: type=1104 audit(1742237718.416:441): pid=5861 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:18.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.12:22-10.200.16.10:49568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:23.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.12:22-10.200.16.10:50986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:55:23.490815 systemd[1]: Started sshd@9-10.200.20.12:22-10.200.16.10:50986.service. Mar 17 18:55:23.496098 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:55:23.496190 kernel: audit: type=1130 audit(1742237723.490:443): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.12:22-10.200.16.10:50986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:23.966000 audit[5874]: USER_ACCT pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:23.967407 sshd[5874]: Accepted publickey for core from 10.200.16.10 port 50986 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:55:23.991904 kernel: audit: type=1101 audit(1742237723.966:444): pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:23.991000 audit[5874]: CRED_ACQ pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:23.992905 sshd[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:24.029888 kernel: audit: type=1103 audit(1742237723.991:445): pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:24.029987 kernel: audit: type=1006 audit(1742237723.991:446): pid=5874 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Mar 17 18:55:23.991000 audit[5874]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe0221230 a2=3 a3=1 items=0 ppid=1 pid=5874 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:24.054519 kernel: audit: type=1300 audit(1742237723.991:446): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe0221230 a2=3 a3=1 items=0 ppid=1 pid=5874 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:23.991000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:24.065901 kernel: audit: type=1327 audit(1742237723.991:446): proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:24.068687 systemd[1]: Started session-12.scope. Mar 17 18:55:24.069211 systemd-logind[1549]: New session 12 of user core. 
Mar 17 18:55:24.072000 audit[5874]: USER_START pid=5874 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:24.074000 audit[5877]: CRED_ACQ pid=5877 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:24.127107 kernel: audit: type=1105 audit(1742237724.072:447): pid=5874 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:24.127213 kernel: audit: type=1103 audit(1742237724.074:448): pid=5877 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:24.408465 sshd[5874]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:24.409000 audit[5874]: USER_END pid=5874 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:24.411629 systemd[1]: sshd@9-10.200.20.12:22-10.200.16.10:50986.service: Deactivated successfully. Mar 17 18:55:24.412490 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 18:55:24.409000 audit[5874]: CRED_DISP pid=5874 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:24.461088 kernel: audit: type=1106 audit(1742237724.409:449): pid=5874 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:24.461227 kernel: audit: type=1104 audit(1742237724.409:450): pid=5874 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:24.464587 systemd-logind[1549]: Session 12 logged out. Waiting for processes to exit. Mar 17 18:55:24.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.12:22-10.200.16.10:50986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:24.465727 systemd-logind[1549]: Removed session 12. Mar 17 18:55:29.479611 systemd[1]: Started sshd@10-10.200.20.12:22-10.200.16.10:47030.service. 
Mar 17 18:55:29.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.12:22-10.200.16.10:47030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:29.485235 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:55:29.485309 kernel: audit: type=1130 audit(1742237729.479:452): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.12:22-10.200.16.10:47030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:29.933000 audit[5889]: USER_ACCT pid=5889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:29.934932 sshd[5889]: Accepted publickey for core from 10.200.16.10 port 47030 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:55:29.936850 sshd[5889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:29.935000 audit[5889]: CRED_ACQ pid=5889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:29.981569 kernel: audit: type=1101 audit(1742237729.933:453): pid=5889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:29.981655 kernel: audit: type=1103 audit(1742237729.935:454): pid=5889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:29.997717 kernel: audit: type=1006 audit(1742237729.935:455): pid=5889 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Mar 17 18:55:29.935000 audit[5889]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd8b91b30 a2=3 a3=1 items=0 ppid=1 pid=5889 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:30.023323 kernel: audit: type=1300 audit(1742237729.935:455): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd8b91b30 a2=3 a3=1 items=0 ppid=1 pid=5889 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:30.000760 systemd-logind[1549]: New session 13 of user core. Mar 17 18:55:30.001442 systemd[1]: Started session-13.scope. 
Mar 17 18:55:29.935000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:30.032331 kernel: audit: type=1327 audit(1742237729.935:455): proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:30.032000 audit[5889]: USER_START pid=5889 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:30.062269 kernel: audit: type=1105 audit(1742237730.032:456): pid=5889 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:30.063227 kernel: audit: type=1103 audit(1742237730.062:457): pid=5892 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:30.062000 audit[5892]: CRED_ACQ pid=5892 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:30.347010 systemd[1]: run-containerd-runc-k8s.io-c095753d191fcbffbc8b75aece1c7488e9f3213681409fbc4369c81a4d9ec146-runc.PzRWoU.mount: Deactivated successfully. Mar 17 18:55:30.396435 sshd[5889]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:30.396000 audit[5889]: USER_END pid=5889 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:30.424537 systemd[1]: sshd@10-10.200.20.12:22-10.200.16.10:47030.service: Deactivated successfully. Mar 17 18:55:30.425381 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 18:55:30.396000 audit[5889]: CRED_DISP pid=5889 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:30.426451 systemd-logind[1549]: Session 13 logged out. Waiting for processes to exit. Mar 17 18:55:30.448452 kernel: audit: type=1106 audit(1742237730.396:458): pid=5889 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:30.448576 kernel: audit: type=1104 audit(1742237730.396:459): pid=5889 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:30.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.12:22-10.200.16.10:47030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
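The numeric `type=` values in the `kernel: audit: type=…` lines can be paired with the named records that carry the same serial, and the pairings can be read directly off the entries above. A lookup table built from those pairings; the label for 1006 (the auid/session assignment record) is an assumption, as this journal never prints a name next to it:

```python
# Mapping observed in this journal: kernel "audit: type=N" lines versus the
# named audit records that share the same serial number.
AUDIT_TYPES = {
    1006: "LOGIN",          # auid/ses assignment (old-auid -> auid); name assumed
    1101: "USER_ACCT",
    1103: "CRED_ACQ",
    1104: "CRED_DISP",
    1105: "USER_START",
    1106: "USER_END",
    1130: "SERVICE_START",
    1300: "SYSCALL",
    1327: "PROCTITLE",
}

def audit_type_name(type_id: int) -> str:
    return AUDIT_TYPES.get(type_id, f"unknown({type_id})")

print(audit_type_name(1130))  # SERVICE_START
```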
terminal=? res=success' Mar 17 18:55:30.449976 systemd-logind[1549]: Removed session 13. Mar 17 18:55:30.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.12:22-10.200.16.10:47046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:30.467084 systemd[1]: Started sshd@11-10.200.20.12:22-10.200.16.10:47046.service. Mar 17 18:55:30.915000 audit[5925]: USER_ACCT pid=5925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:30.916442 sshd[5925]: Accepted publickey for core from 10.200.16.10 port 47046 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:55:30.916000 audit[5925]: CRED_ACQ pid=5925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:30.916000 audit[5925]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0330520 a2=3 a3=1 items=0 ppid=1 pid=5925 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:30.916000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:30.917792 sshd[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:30.921803 systemd-logind[1549]: New session 14 of user core. Mar 17 18:55:30.922331 systemd[1]: Started session-14.scope. Mar 17 18:55:30.926000 audit[5925]: USER_START pid=5925 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:30.927000 audit[5928]: CRED_ACQ pid=5928 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:31.356374 sshd[5925]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:31.356000 audit[5925]: USER_END pid=5925 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:31.356000 audit[5925]: CRED_DISP pid=5925 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:31.358890 systemd-logind[1549]: Session 14 logged out. Waiting for processes to exit. Mar 17 18:55:31.359219 systemd[1]: sshd@11-10.200.20.12:22-10.200.16.10:47046.service: Deactivated successfully. 
Mar 17 18:55:31.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.12:22-10.200.16.10:47046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:31.360118 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 18:55:31.362146 systemd-logind[1549]: Removed session 14. Mar 17 18:55:31.424662 systemd[1]: Started sshd@12-10.200.20.12:22-10.200.16.10:47060.service. Mar 17 18:55:31.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.12:22-10.200.16.10:47060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:31.835000 audit[5940]: USER_ACCT pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:31.836201 sshd[5940]: Accepted publickey for core from 10.200.16.10 port 47060 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:55:31.836000 audit[5940]: CRED_ACQ pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:31.836000 audit[5940]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffce0cd420 a2=3 a3=1 items=0 ppid=1 pid=5940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:31.836000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:31.837877 sshd[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:31.843445 systemd[1]: Started session-15.scope. Mar 17 18:55:31.844512 systemd-logind[1549]: New session 15 of user core. Mar 17 18:55:31.848000 audit[5940]: USER_START pid=5940 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:31.849000 audit[5957]: CRED_ACQ pid=5957 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:32.204259 sshd[5940]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:32.204000 audit[5940]: USER_END pid=5940 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:32.204000 audit[5940]: CRED_DISP pid=5940 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:32.207393 systemd-logind[1549]: Session 15 logged out. Waiting for processes to exit. 
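systemd names each per-connection unit as `sshd@<instance>-<local>:<port>-<peer>:<port>.service`, as in `sshd@12-10.200.20.12:22-10.200.16.10:47060.service` above. A sketch that splits such a unit name into its endpoints (the regex is written against the names shown here and is purely illustrative):

```python
import re

UNIT = re.compile(
    r"sshd@(?P<idx>\d+)-(?P<local>[\d.]+):(?P<lport>\d+)-(?P<peer>[\d.]+):(?P<pport>\d+)\.service"
)

def parse_sshd_unit(name: str):
    """Return (instance, (local_addr, local_port), (peer_addr, peer_port))."""
    m = UNIT.fullmatch(name)
    if not m:
        raise ValueError(f"not an sshd per-connection unit: {name}")
    return m["idx"], (m["local"], int(m["lport"])), (m["peer"], int(m["pport"]))

print(parse_sshd_unit("sshd@12-10.200.20.12:22-10.200.16.10:47060.service"))
# ('12', ('10.200.20.12', 22), ('10.200.16.10', 47060))
```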
Mar 17 18:55:32.208209 systemd[1]: sshd@12-10.200.20.12:22-10.200.16.10:47060.service: Deactivated successfully. Mar 17 18:55:32.209114 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 18:55:32.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.12:22-10.200.16.10:47060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:32.209515 systemd-logind[1549]: Removed session 15. Mar 17 18:55:37.279213 systemd[1]: Started sshd@13-10.200.20.12:22-10.200.16.10:47070.service. Mar 17 18:55:37.306946 kernel: kauditd_printk_skb: 23 callbacks suppressed Mar 17 18:55:37.307103 kernel: audit: type=1130 audit(1742237737.278:479): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.12:22-10.200.16.10:47070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:37.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.12:22-10.200.16.10:47070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:37.710000 audit[5989]: USER_ACCT pid=5989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:37.711669 sshd[5989]: Accepted publickey for core from 10.200.16.10 port 47070 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:55:37.713689 sshd[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:37.712000 audit[5989]: CRED_ACQ pid=5989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:37.757263 kernel: audit: type=1101 audit(1742237737.710:480): pid=5989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:37.757374 kernel: audit: type=1103 audit(1742237737.712:481): pid=5989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:37.770787 kernel: audit: type=1006 audit(1742237737.712:482): pid=5989 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Mar 17 18:55:37.712000 audit[5989]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcf927870 a2=3 a3=1 items=0 ppid=1 pid=5989 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:37.794735 kernel: audit: type=1300 audit(1742237737.712:482): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcf927870 a2=3 a3=1 items=0 ppid=1 pid=5989 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:37.712000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:37.803560 kernel: audit: type=1327 audit(1742237737.712:482): proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:37.806061 systemd-logind[1549]: New session 16 of user core. Mar 17 18:55:37.806442 systemd[1]: Started session-16.scope. Mar 17 18:55:37.810000 audit[5989]: USER_START pid=5989 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:37.837000 audit[5992]: CRED_ACQ pid=5992 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:37.859616 kernel: audit: type=1105 audit(1742237737.810:483): pid=5989 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:37.859770 kernel: audit: type=1103 audit(1742237737.837:484): pid=5992 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:38.144694 sshd[5989]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:38.145000 audit[5989]: USER_END pid=5989 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:38.150000 audit[5989]: CRED_DISP pid=5989 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:38.174775 systemd[1]: sshd@13-10.200.20.12:22-10.200.16.10:47070.service: Deactivated successfully. Mar 17 18:55:38.177151 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 18:55:38.177750 systemd-logind[1549]: Session 16 logged out. Waiting for processes to exit. Mar 17 18:55:38.178963 systemd-logind[1549]: Removed session 16. 
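The systemd-logind messages bracket each SSH login: `New session N of user core` at open and `Removed session N.` at close. A rough sketch that pairs them from captured journal text and reports per-session lifetimes; the timestamp format is the `Mar 17 HH:MM:SS.micro` prefix seen here, and the year is supplied by hand because the prefix omits it:

```python
import re
from datetime import datetime

OPEN = re.compile(r"(\w{3} +\d+ \d\d:\d\d:\d\d\.\d+) systemd-logind\[\d+\]: New session (\d+) of user core")
CLOSE = re.compile(r"(\w{3} +\d+ \d\d:\d\d:\d\d\.\d+) systemd-logind\[\d+\]: Removed session (\d+)\.")

def _ts(stamp: str, year: int) -> datetime:
    # Journal prefixes carry no year, so one is supplied explicitly.
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

def session_durations(journal_text: str, year: int = 2025):
    """Map session id -> seconds between 'New session' and 'Removed session'."""
    opened = {m.group(2): _ts(m.group(1), year) for m in OPEN.finditer(journal_text)}
    closed = {m.group(2): _ts(m.group(1), year) for m in CLOSE.finditer(journal_text)}
    return {sid: (closed[sid] - opened[sid]).total_seconds()
            for sid in opened.keys() & closed.keys()}
```

Run over this capture it would show each session lasting well under a minute, consistent with the short command sessions logged above.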
Mar 17 18:55:38.197959 kernel: audit: type=1106 audit(1742237738.145:485): pid=5989 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:38.198076 kernel: audit: type=1104 audit(1742237738.150:486): pid=5989 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:38.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.12:22-10.200.16.10:47070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:43.222982 systemd[1]: Started sshd@14-10.200.20.12:22-10.200.16.10:58414.service. Mar 17 18:55:43.249645 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:55:43.249730 kernel: audit: type=1130 audit(1742237743.222:488): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.12:22-10.200.16.10:58414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:43.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.12:22-10.200.16.10:58414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:43.698000 audit[6002]: USER_ACCT pid=6002 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:43.699510 sshd[6002]: Accepted publickey for core from 10.200.16.10 port 58414 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:55:43.701277 sshd[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:43.700000 audit[6002]: CRED_ACQ pid=6002 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:43.744797 kernel: audit: type=1101 audit(1742237743.698:489): pid=6002 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:43.744980 kernel: audit: type=1103 audit(1742237743.700:490): pid=6002 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:43.759266 kernel: audit: type=1006 audit(1742237743.700:491): pid=6002 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Mar 17 18:55:43.700000 audit[6002]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdaeb5470 a2=3 a3=1 items=0 ppid=1 pid=6002 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:43.783737 kernel: audit: type=1300 audit(1742237743.700:491): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdaeb5470 a2=3 a3=1 items=0 ppid=1 pid=6002 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:43.700000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:43.792062 kernel: audit: type=1327 audit(1742237743.700:491): proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:43.795406 systemd[1]: Started session-17.scope. Mar 17 18:55:43.795738 systemd-logind[1549]: New session 17 of user core. Mar 17 18:55:43.799000 audit[6002]: USER_START pid=6002 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:43.827452 kernel: audit: type=1105 audit(1742237743.799:492): pid=6002 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:43.827535 kernel: audit: type=1103 audit(1742237743.826:493): pid=6005 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:43.826000 audit[6005]: CRED_ACQ pid=6005 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:44.172121 sshd[6002]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:44.172000 audit[6002]: USER_END pid=6002 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:44.173000 audit[6002]: CRED_DISP pid=6002 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:44.200207 systemd[1]: sshd@14-10.200.20.12:22-10.200.16.10:58414.service: Deactivated successfully. Mar 17 18:55:44.200986 systemd[1]: session-17.scope: Deactivated successfully. 
Mar 17 18:55:44.222273 kernel: audit: type=1106 audit(1742237744.172:494): pid=6002 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:44.222381 kernel: audit: type=1104 audit(1742237744.173:495): pid=6002 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:44.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.12:22-10.200.16.10:58414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:44.223113 systemd-logind[1549]: Session 17 logged out. Waiting for processes to exit. Mar 17 18:55:44.223843 systemd-logind[1549]: Removed session 17. Mar 17 18:55:49.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.12:22-10.200.16.10:56634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:49.242714 systemd[1]: Started sshd@15-10.200.20.12:22-10.200.16.10:56634.service. Mar 17 18:55:49.248899 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:55:49.248990 kernel: audit: type=1130 audit(1742237749.241:497): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.12:22-10.200.16.10:56634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:49.974349 update_engine[1550]: I0317 18:55:49.974301 1550 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 17 18:55:49.974349 update_engine[1550]: I0317 18:55:49.974335 1550 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 17 18:55:49.975819 update_engine[1550]: I0317 18:55:49.975630 1550 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 17 18:55:49.976238 update_engine[1550]: I0317 18:55:49.976121 1550 omaha_request_params.cc:62] Current group set to lts Mar 17 18:55:49.977619 update_engine[1550]: I0317 18:55:49.977439 1550 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 17 18:55:49.977619 update_engine[1550]: I0317 18:55:49.977450 1550 update_attempter.cc:643] Scheduling an action processor start. 
Mar 17 18:55:49.977619 update_engine[1550]: I0317 18:55:49.977468 1550 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 18:55:49.977619 update_engine[1550]: I0317 18:55:49.977497 1550 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 17 18:55:49.979660 locksmithd[1644]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 17 18:55:49.996117 update_engine[1550]: I0317 18:55:49.996034 1550 omaha_request_action.cc:270] Posting an Omaha request to disabled Mar 17 18:55:49.996117 update_engine[1550]: I0317 18:55:49.996057 1550 omaha_request_action.cc:271] Request: Mar 17 18:55:49.996117 update_engine[1550]: Mar 17 18:55:49.996117 update_engine[1550]: Mar 17 18:55:49.996117 update_engine[1550]: Mar 17 18:55:49.996117 update_engine[1550]: Mar 17 18:55:49.996117 update_engine[1550]: Mar 17 18:55:49.996117 update_engine[1550]: Mar 17 18:55:49.996117 update_engine[1550]: Mar 17 18:55:49.996117 update_engine[1550]: Mar 17 18:55:49.996117 update_engine[1550]: I0317 18:55:49.996062 1550 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:55:50.001000 audit[6017]: USER_ACCT pid=6017 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:50.004946 sshd[6017]: Accepted publickey for core from 10.200.16.10 port 56634 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:55:50.006783 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:50.001000 audit[6017]: CRED_ACQ pid=6017 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:50.055990 kernel: audit: type=1101 audit(1742237750.001:498): pid=6017 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:50.056337 kernel: audit: type=1103 audit(1742237750.001:499): pid=6017 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:50.072599 kernel: audit: type=1006 audit(1742237750.001:500): pid=6017 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Mar 17 18:55:50.073181 kernel: audit: type=1300 audit(1742237750.001:500): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff79b46e0 a2=3 a3=1 items=0 ppid=1 pid=6017 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:50.001000 audit[6017]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff79b46e0 a2=3 a3=1 items=0 ppid=1 pid=6017 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:50.101703 update_engine[1550]: I0317 
18:55:50.101627 1550 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:55:50.001000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:50.102493 update_engine[1550]: I0317 18:55:50.102416 1550 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:55:50.110767 kernel: audit: type=1327 audit(1742237750.001:500): proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:50.114468 systemd[1]: Started session-18.scope. Mar 17 18:55:50.115395 systemd-logind[1549]: New session 18 of user core. Mar 17 18:55:50.120000 audit[6017]: USER_START pid=6017 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:50.121000 audit[6020]: CRED_ACQ pid=6020 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:50.176332 kernel: audit: type=1105 audit(1742237750.120:501): pid=6017 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:50.176442 kernel: audit: type=1103 audit(1742237750.121:502): pid=6020 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:50.211217 update_engine[1550]: E0317 18:55:50.211045 1550 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:55:50.211217 update_engine[1550]: I0317 18:55:50.211182 1550 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 17 18:55:50.471926 sshd[6017]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:50.470000 audit[6017]: USER_END pid=6017 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:50.475353 systemd[1]: sshd@15-10.200.20.12:22-10.200.16.10:56634.service: Deactivated successfully. Mar 17 18:55:50.476201 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 18:55:50.471000 audit[6017]: CRED_DISP pid=6017 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:50.502062 systemd-logind[1549]: Session 18 logged out. Waiting for processes to exit. Mar 17 18:55:50.503061 systemd-logind[1549]: Removed session 18. 
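update_engine is posting its Omaha request to the literal host `disabled`, so the `Could not resolve host: disabled` errors and the climbing retry counter are expected rather than a network fault; reading this as a deliberately disabled update server is an inference from the URL alone. A small sketch for pulling the latest retry attempt out of captured journal text (the regex is written against the lines above):

```python
import re

RETRY = re.compile(r"libcurl_http_fetcher\.cc:\d+\] No HTTP response, retry (\d+)")

def latest_update_retry(journal_text: str) -> int:
    """Highest 'No HTTP response, retry N' attempt update_engine has logged so far."""
    return max((int(m.group(1)) for m in RETRY.finditer(journal_text)), default=0)
```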
Mar 17 18:55:50.524209 kernel: audit: type=1106 audit(1742237750.470:503): pid=6017 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:50.524317 kernel: audit: type=1104 audit(1742237750.471:504): pid=6017 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:50.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.12:22-10.200.16.10:56634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:55.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.12:22-10.200.16.10:56648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:55.535282 systemd[1]: Started sshd@16-10.200.20.12:22-10.200.16.10:56648.service. Mar 17 18:55:55.540511 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:55:55.540561 kernel: audit: type=1130 audit(1742237755.533:506): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.12:22-10.200.16.10:56648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:55.948000 audit[6029]: USER_ACCT pid=6029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:55.949752 sshd[6029]: Accepted publickey for core from 10.200.16.10 port 56648 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:55:55.975918 kernel: audit: type=1101 audit(1742237755.948:507): pid=6029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:55.974000 audit[6029]: CRED_ACQ pid=6029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:55.976732 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:55.981851 systemd[1]: Started session-19.scope. Mar 17 18:55:55.982510 systemd-logind[1549]: New session 19 of user core. 
Mar 17 18:55:56.015769 kernel: audit: type=1103 audit(1742237755.974:508): pid=6029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:56.015934 kernel: audit: type=1006 audit(1742237755.974:509): pid=6029 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Mar 17 18:55:55.974000 audit[6029]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeeb24720 a2=3 a3=1 items=0 ppid=1 pid=6029 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:56.042297 kernel: audit: type=1300 audit(1742237755.974:509): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeeb24720 a2=3 a3=1 items=0 ppid=1 pid=6029 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:55.974000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:56.051927 kernel: audit: type=1327 audit(1742237755.974:509): proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:56.052024 kernel: audit: type=1105 audit(1742237755.998:510): pid=6029 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:55.998000 audit[6029]: USER_START pid=6029 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:56.087225 kernel: audit: type=1103 audit(1742237756.001:511): pid=6032 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:56.001000 audit[6032]: CRED_ACQ pid=6032 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:56.341695 sshd[6029]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:56.341000 audit[6029]: USER_END pid=6029 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:56.345604 systemd-logind[1549]: Session 19 logged out. Waiting for processes to exit. Mar 17 18:55:56.346892 systemd[1]: sshd@16-10.200.20.12:22-10.200.16.10:56648.service: Deactivated successfully. Mar 17 18:55:56.347724 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 18:55:56.348949 systemd-logind[1549]: Removed session 19. 
Mar 17 18:55:56.341000 audit[6029]: CRED_DISP pid=6029 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:56.392244 kernel: audit: type=1106 audit(1742237756.341:512): pid=6029 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:56.395033 kernel: audit: type=1104 audit(1742237756.341:513): pid=6029 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:56.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.12:22-10.200.16.10:56648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:56.422322 systemd[1]: Started sshd@17-10.200.20.12:22-10.200.16.10:56660.service. Mar 17 18:55:56.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.12:22-10.200.16.10:56660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:56.897000 audit[6042]: USER_ACCT pid=6042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:56.898199 sshd[6042]: Accepted publickey for core from 10.200.16.10 port 56660 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:55:56.898000 audit[6042]: CRED_ACQ pid=6042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:56.898000 audit[6042]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3753d60 a2=3 a3=1 items=0 ppid=1 pid=6042 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:56.898000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:56.899801 sshd[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:56.904370 systemd[1]: Started session-20.scope. Mar 17 18:55:56.904553 systemd-logind[1549]: New session 20 of user core. 
Mar 17 18:55:56.912000 audit[6042]: USER_START pid=6042 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:56.913000 audit[6045]: CRED_ACQ pid=6045 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:57.413636 sshd[6042]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:57.413000 audit[6042]: USER_END pid=6042 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:57.413000 audit[6042]: CRED_DISP pid=6042 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:57.416210 systemd-logind[1549]: Session 20 logged out. Waiting for processes to exit. Mar 17 18:55:57.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.12:22-10.200.16.10:56660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:57.416818 systemd[1]: sshd@17-10.200.20.12:22-10.200.16.10:56660.service: Deactivated successfully. Mar 17 18:55:57.418002 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 18:55:57.418704 systemd-logind[1549]: Removed session 20. Mar 17 18:55:57.483102 systemd[1]: Started sshd@18-10.200.20.12:22-10.200.16.10:56670.service. Mar 17 18:55:57.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.12:22-10.200.16.10:56670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:55:57.931000 audit[6052]: USER_ACCT pid=6052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:57.933505 sshd[6052]: Accepted publickey for core from 10.200.16.10 port 56670 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:55:57.932000 audit[6052]: CRED_ACQ pid=6052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:57.932000 audit[6052]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc04131a0 a2=3 a3=1 items=0 ppid=1 pid=6052 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:57.932000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:57.933909 sshd[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:57.937788 systemd-logind[1549]: New session 21 of user core. Mar 17 18:55:57.938266 systemd[1]: Started session-21.scope. Mar 17 18:55:57.941000 audit[6052]: USER_START pid=6052 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:57.943000 audit[6055]: CRED_ACQ pid=6055 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:55:59.943000 audit[6065]: NETFILTER_CFG table=filter:122 family=2 entries=20 op=nft_register_rule pid=6065 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:59.943000 audit[6065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffed2cf790 a2=0 a3=1 items=0 ppid=2898 pid=6065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:59.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:59.948000 audit[6065]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=6065 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:59.948000 audit[6065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffed2cf790 a2=0 a3=1 items=0 ppid=2898 pid=6065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:59.948000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:59.958000 audit[6067]: NETFILTER_CFG table=filter:124 family=2 entries=32 op=nft_register_rule pid=6067 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:59.958000 
audit[6067]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffe81706b0 a2=0 a3=1 items=0 ppid=2898 pid=6067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:59.958000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:59.961000 audit[6067]: NETFILTER_CFG table=nat:125 family=2 entries=22 op=nft_register_rule pid=6067 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:59.961000 audit[6067]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffe81706b0 a2=0 a3=1 items=0 ppid=2898 pid=6067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:59.961000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:59.972967 update_engine[1550]: I0317 18:55:59.972927 1550 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:55:59.973294 update_engine[1550]: I0317 18:55:59.973095 1550 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:55:59.973294 update_engine[1550]: I0317 18:55:59.973248 1550 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:55:59.984983 update_engine[1550]: E0317 18:55:59.984951 1550 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:55:59.985081 update_engine[1550]: I0317 18:55:59.985046 1550 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 17 18:56:00.034971 sshd[6052]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:00.035000 audit[6052]: USER_END pid=6052 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:00.035000 audit[6052]: CRED_DISP pid=6052 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:00.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.12:22-10.200.16.10:56670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:00.037675 systemd-logind[1549]: Session 21 logged out. Waiting for processes to exit. Mar 17 18:56:00.037803 systemd[1]: sshd@18-10.200.20.12:22-10.200.16.10:56670.service: Deactivated successfully. Mar 17 18:56:00.038727 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 18:56:00.039213 systemd-logind[1549]: Removed session 21. Mar 17 18:56:00.111839 systemd[1]: Started sshd@19-10.200.20.12:22-10.200.16.10:58732.service. Mar 17 18:56:00.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.12:22-10.200.16.10:58732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:56:00.349239 systemd[1]: run-containerd-runc-k8s.io-c095753d191fcbffbc8b75aece1c7488e9f3213681409fbc4369c81a4d9ec146-runc.2Z3Pg9.mount: Deactivated successfully. Mar 17 18:56:00.566000 audit[6070]: USER_ACCT pid=6070 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:00.568020 sshd[6070]: Accepted publickey for core from 10.200.16.10 port 58732 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:56:00.572219 kernel: kauditd_printk_skb: 36 callbacks suppressed Mar 17 18:56:00.572317 kernel: audit: type=1101 audit(1742237760.566:538): pid=6070 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:00.572991 sshd[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:56:00.571000 audit[6070]: CRED_ACQ pid=6070 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:00.623016 kernel: audit: type=1103 audit(1742237760.571:539): pid=6070 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:00.638352 kernel: audit: type=1006 audit(1742237760.571:540): pid=6070 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Mar 17 18:56:00.638457 kernel: audit: type=1300 audit(1742237760.571:540): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe7b29b80 a2=3 a3=1 items=0 ppid=1 pid=6070 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:00.571000 audit[6070]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe7b29b80 a2=3 a3=1 items=0 ppid=1 pid=6070 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:00.571000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:00.672469 kernel: audit: type=1327 audit(1742237760.571:540): proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:00.675185 systemd[1]: Started session-22.scope. Mar 17 18:56:00.676317 systemd-logind[1549]: New session 22 of user core. 
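The audit PROCTITLE fields above are the audited process's command line, hex-encoded with NUL bytes between arguments. A minimal, illustrative Python sketch decodes the two distinct values that recur throughout this log:

```python
# Decode the hex-encoded, NUL-separated PROCTITLE values copied verbatim
# from the audit records in this log.
def decode_proctitle(hex_str: str) -> str:
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

print(decode_proctitle("737368643A20636F7265205B707269765D"))
# -> sshd: core [priv]   (OpenSSH's privileged monitor process for user "core")
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```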
Mar 17 18:56:00.681000 audit[6070]: USER_START pid=6070 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:00.681000 audit[6095]: CRED_ACQ pid=6095 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:00.732765 kernel: audit: type=1105 audit(1742237760.681:541): pid=6070 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:00.732844 kernel: audit: type=1103 audit(1742237760.681:542): pid=6095 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:01.115086 sshd[6070]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:01.115000 audit[6070]: USER_END pid=6070 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:01.115000 audit[6070]: CRED_DISP pid=6070 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:01.147411 systemd[1]: sshd@19-10.200.20.12:22-10.200.16.10:58732.service: Deactivated successfully. Mar 17 18:56:01.148241 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:56:01.174286 kernel: audit: type=1106 audit(1742237761.115:543): pid=6070 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:01.174426 kernel: audit: type=1104 audit(1742237761.115:544): pid=6070 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:01.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.12:22-10.200.16.10:58732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:01.207881 kernel: audit: type=1131 audit(1742237761.146:545): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.12:22-10.200.16.10:58732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:01.208129 systemd-logind[1549]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:56:01.209635 systemd-logind[1549]: Removed session 22. 
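Each kernel-printed audit record above carries an identifier of the form audit(&lt;epoch seconds&gt;.&lt;milliseconds&gt;:&lt;serial&gt;), while the journal adds its own wall-clock prefix; auid=4294967295 is simply the unset login UID ((u32)-1), and the type=1006 LOGIN records show it changing to auid=500 once the "core" session is established. A quick, illustrative conversion of one identifier taken from the lines above shows the two clocks agree:

```python
from datetime import datetime, timezone

# audit(1742237760.566:538) is copied verbatim from a record above; the part
# before ':' is a UNIX timestamp, the part after it is the record serial.
epoch, serial = "1742237760.566:538".split(":")
secs, millis = epoch.split(".")
ts = datetime.fromtimestamp(int(secs), tz=timezone.utc)
print(f"{ts.isoformat()} +{millis}ms, serial {serial}")
# -> 2025-03-17T18:56:00+00:00 +566ms, serial 538  (matches "Mar 17 18:56:00")
```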
Mar 17 18:56:01.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.12:22-10.200.16.10:58744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:01.218658 systemd[1]: Started sshd@20-10.200.20.12:22-10.200.16.10:58744.service. Mar 17 18:56:01.648000 audit[6103]: USER_ACCT pid=6103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:01.650021 sshd[6103]: Accepted publickey for core from 10.200.16.10 port 58744 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:56:01.650000 audit[6103]: CRED_ACQ pid=6103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:01.650000 audit[6103]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd5bf3770 a2=3 a3=1 items=0 ppid=1 pid=6103 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:01.650000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:01.651734 sshd[6103]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:56:01.656134 systemd[1]: Started session-23.scope. Mar 17 18:56:01.656925 systemd-logind[1549]: New session 23 of user core. Mar 17 18:56:01.660000 audit[6103]: USER_START pid=6103 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:01.662000 audit[6106]: CRED_ACQ pid=6106 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:02.050448 sshd[6103]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:02.050000 audit[6103]: USER_END pid=6103 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:02.050000 audit[6103]: CRED_DISP pid=6103 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:02.053101 systemd[1]: sshd@20-10.200.20.12:22-10.200.16.10:58744.service: Deactivated successfully. Mar 17 18:56:02.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.12:22-10.200.16.10:58744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:02.053962 systemd[1]: session-23.scope: Deactivated successfully. 
Mar 17 18:56:02.055026 systemd-logind[1549]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:56:02.055747 systemd-logind[1549]: Removed session 23. Mar 17 18:56:05.250418 systemd[1]: run-containerd-runc-k8s.io-5b09cf7369d1b40478e8ae3631e59ae69be41e1f9171ab3ce951bbc49e85a076-runc.x4QPA2.mount: Deactivated successfully. Mar 17 18:56:05.573000 audit[6138]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=6138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:05.580079 kernel: kauditd_printk_skb: 11 callbacks suppressed Mar 17 18:56:05.580205 kernel: audit: type=1325 audit(1742237765.573:555): table=filter:126 family=2 entries=20 op=nft_register_rule pid=6138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:05.573000 audit[6138]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffce011f00 a2=0 a3=1 items=0 ppid=2898 pid=6138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:05.625360 kernel: audit: type=1300 audit(1742237765.573:555): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffce011f00 a2=0 a3=1 items=0 ppid=2898 pid=6138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:05.573000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:05.639894 kernel: audit: type=1327 audit(1742237765.573:555): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:05.595000 audit[6138]: NETFILTER_CFG table=nat:127 family=2 entries=106 op=nft_register_chain pid=6138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:05.655281 kernel: audit: type=1325 audit(1742237765.595:556): table=nat:127 family=2 entries=106 op=nft_register_chain pid=6138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:05.655363 kernel: audit: type=1300 audit(1742237765.595:556): arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffce011f00 a2=0 a3=1 items=0 ppid=2898 pid=6138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:05.595000 audit[6138]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffce011f00 a2=0 a3=1 items=0 ppid=2898 pid=6138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:05.595000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:05.698848 kernel: audit: type=1327 audit(1742237765.595:556): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:07.116505 systemd[1]: Started sshd@21-10.200.20.12:22-10.200.16.10:58746.service. 
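The NETFILTER_CFG records above (and the earlier batch at 18:55:59) log full rule-set reloads pushed over netfilter netlink by the iptables-restore invocation whose decoded PROCTITLE is shown earlier; arch=c00000b7 is the AArch64 audit architecture, and syscall 211 on that table is sendmsg, i.e. the netlink message that registers the chains and rules. comm appears as "iptables-restor" only because the kernel truncates comm to 15 characters. The repeated filter/nat reloads all come from children of ppid=2898, which is consistent with a Kubernetes networking component refreshing its rules on this node, though the excerpt itself does not name that parent. A small, illustrative sketch of pulling the key fields out of one such record:

```python
import re

# Parse key=value fields from one NETFILTER_CFG record (abridged copy of a
# line above); illustrative only, not a general audit parser.
line = ('audit[6138]: NETFILTER_CFG table=nat:127 family=2 entries=106 '
        'op=nft_register_chain pid=6138 comm="iptables-restor"')
fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', line))
print(fields["table"], fields["entries"], fields["op"], fields["comm"])
# -> nat:127 106 nft_register_chain "iptables-restor"
```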
Mar 17 18:56:07.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.12:22-10.200.16.10:58746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:07.139900 kernel: audit: type=1130 audit(1742237767.115:557): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.12:22-10.200.16.10:58746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:07.527000 audit[6140]: USER_ACCT pid=6140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:07.528229 sshd[6140]: Accepted publickey for core from 10.200.16.10 port 58746 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:56:07.553000 audit[6140]: CRED_ACQ pid=6140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:07.555931 sshd[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:56:07.579031 kernel: audit: type=1101 audit(1742237767.527:558): pid=6140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:07.579297 kernel: audit: type=1103 audit(1742237767.553:559): pid=6140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:07.594622 kernel: audit: type=1006 audit(1742237767.553:560): pid=6140 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Mar 17 18:56:07.553000 audit[6140]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd6ad7dd0 a2=3 a3=1 items=0 ppid=1 pid=6140 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:07.553000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:07.598529 systemd[1]: Started session-24.scope. Mar 17 18:56:07.598748 systemd-logind[1549]: New session 24 of user core. 
Mar 17 18:56:07.602000 audit[6140]: USER_START pid=6140 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:07.603000 audit[6143]: CRED_ACQ pid=6143 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:07.909594 sshd[6140]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:07.910000 audit[6140]: USER_END pid=6140 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:07.910000 audit[6140]: CRED_DISP pid=6140 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:07.913199 systemd[1]: sshd@21-10.200.20.12:22-10.200.16.10:58746.service: Deactivated successfully. Mar 17 18:56:07.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.12:22-10.200.16.10:58746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:07.914030 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 18:56:07.914443 systemd-logind[1549]: Session 24 logged out. Waiting for processes to exit. Mar 17 18:56:07.915131 systemd-logind[1549]: Removed session 24. Mar 17 18:56:09.974386 update_engine[1550]: I0317 18:56:09.973954 1550 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:56:09.974386 update_engine[1550]: I0317 18:56:09.974143 1550 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:56:09.974386 update_engine[1550]: I0317 18:56:09.974350 1550 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:56:10.080188 update_engine[1550]: E0317 18:56:10.080045 1550 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:56:10.080188 update_engine[1550]: I0317 18:56:10.080149 1550 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 17 18:56:13.011852 kernel: kauditd_printk_skb: 7 callbacks suppressed Mar 17 18:56:13.012041 kernel: audit: type=1130 audit(1742237772.983:566): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.12:22-10.200.16.10:55642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:12.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.12:22-10.200.16.10:55642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:12.984201 systemd[1]: Started sshd@22-10.200.20.12:22-10.200.16.10:55642.service. 
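Every inbound connection in this log gets its own transient unit named sshd@&lt;seq&gt;-&lt;local addr&gt;:&lt;port&gt;-&lt;peer addr&gt;:&lt;port&gt;.service, started when the connection arrives and stopped (SERVICE_STOP) when the session ends; this naming is consistent with socket-activated, per-connection sshd instances. A throwaway parse of one instance name taken from the lines above (IPv4 only, as seen here):

```python
# Split one per-connection unit name from the log above into its parts
# (works for the IPv4 addresses seen here; not a general parser).
unit = "sshd@19-10.200.20.12:22-10.200.16.10:58732.service"
seq, local, peer = unit.removeprefix("sshd@").removesuffix(".service").split("-")
print(seq, local, peer)
# -> 19 10.200.20.12:22 10.200.16.10:58732
```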
Mar 17 18:56:13.416000 audit[6153]: USER_ACCT pid=6153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:13.417791 sshd[6153]: Accepted publickey for core from 10.200.16.10 port 55642 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:56:13.442924 kernel: audit: type=1101 audit(1742237773.416:567): pid=6153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:13.442000 audit[6153]: CRED_ACQ pid=6153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:13.443509 sshd[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:56:13.479619 kernel: audit: type=1103 audit(1742237773.442:568): pid=6153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:13.479759 kernel: audit: type=1006 audit(1742237773.442:569): pid=6153 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Mar 17 18:56:13.479786 kernel: audit: type=1300 audit(1742237773.442:569): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc649780 a2=3 a3=1 items=0 ppid=1 pid=6153 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:13.442000 audit[6153]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc649780 a2=3 a3=1 items=0 ppid=1 pid=6153 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:13.442000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:13.508025 systemd-logind[1549]: New session 25 of user core. Mar 17 18:56:13.508812 systemd[1]: Started session-25.scope. 
Mar 17 18:56:13.512109 kernel: audit: type=1327 audit(1742237773.442:569): proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:13.513000 audit[6153]: USER_START pid=6153 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:13.521000 audit[6156]: CRED_ACQ pid=6156 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:13.564807 kernel: audit: type=1105 audit(1742237773.513:570): pid=6153 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:13.564942 kernel: audit: type=1103 audit(1742237773.521:571): pid=6156 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:13.865724 sshd[6153]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:13.865000 audit[6153]: USER_END pid=6153 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:13.866000 audit[6153]: CRED_DISP pid=6153 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:13.918908 kernel: audit: type=1106 audit(1742237773.865:572): pid=6153 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:13.919077 kernel: audit: type=1104 audit(1742237773.866:573): pid=6153 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:13.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.12:22-10.200.16.10:55642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:13.895546 systemd[1]: sshd@22-10.200.20.12:22-10.200.16.10:55642.service: Deactivated successfully. Mar 17 18:56:13.919875 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 18:56:13.920344 systemd-logind[1549]: Session 25 logged out. Waiting for processes to exit. Mar 17 18:56:13.921529 systemd-logind[1549]: Removed session 25. Mar 17 18:56:14.925914 systemd[1]: run-containerd-runc-k8s.io-5b09cf7369d1b40478e8ae3631e59ae69be41e1f9171ab3ce951bbc49e85a076-runc.0o31PJ.mount: Deactivated successfully. 
Mar 17 18:56:18.929723 systemd[1]: Started sshd@23-10.200.20.12:22-10.200.16.10:41776.service. Mar 17 18:56:18.958709 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:56:18.958857 kernel: audit: type=1130 audit(1742237778.929:575): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.12:22-10.200.16.10:41776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:18.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.12:22-10.200.16.10:41776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:19.342000 audit[6190]: USER_ACCT pid=6190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:19.343434 sshd[6190]: Accepted publickey for core from 10.200.16.10 port 41776 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:56:19.345633 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:56:19.344000 audit[6190]: CRED_ACQ pid=6190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:19.393779 kernel: audit: type=1101 audit(1742237779.342:576): pid=6190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:19.394003 kernel: audit: type=1103 audit(1742237779.344:577): pid=6190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:19.394059 kernel: audit: type=1006 audit(1742237779.344:578): pid=6190 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Mar 17 18:56:19.344000 audit[6190]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4acebc0 a2=3 a3=1 items=0 ppid=1 pid=6190 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:19.413813 systemd[1]: Started session-26.scope. Mar 17 18:56:19.414725 systemd-logind[1549]: New session 26 of user core. 
Mar 17 18:56:19.434953 kernel: audit: type=1300 audit(1742237779.344:578): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4acebc0 a2=3 a3=1 items=0 ppid=1 pid=6190 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:19.344000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:19.444519 kernel: audit: type=1327 audit(1742237779.344:578): proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:19.444675 kernel: audit: type=1105 audit(1742237779.435:579): pid=6190 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:19.435000 audit[6190]: USER_START pid=6190 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:19.437000 audit[6193]: CRED_ACQ pid=6193 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:19.495636 kernel: audit: type=1103 audit(1742237779.437:580): pid=6193 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:19.762840 sshd[6190]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:19.763000 audit[6190]: USER_END pid=6190 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:19.766794 systemd[1]: sshd@23-10.200.20.12:22-10.200.16.10:41776.service: Deactivated successfully. Mar 17 18:56:19.767696 systemd[1]: session-26.scope: Deactivated successfully. 
Mar 17 18:56:19.763000 audit[6190]: CRED_DISP pid=6190 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:19.818189 kernel: audit: type=1106 audit(1742237779.763:581): pid=6190 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:19.818331 kernel: audit: type=1104 audit(1742237779.763:582): pid=6190 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:19.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.12:22-10.200.16.10:41776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:19.819585 systemd-logind[1549]: Session 26 logged out. Waiting for processes to exit. Mar 17 18:56:19.820773 systemd-logind[1549]: Removed session 26. Mar 17 18:56:19.971559 update_engine[1550]: I0317 18:56:19.971144 1550 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:56:19.971559 update_engine[1550]: I0317 18:56:19.971331 1550 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:56:19.971559 update_engine[1550]: I0317 18:56:19.971521 1550 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:56:20.049496 update_engine[1550]: E0317 18:56:20.048512 1550 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:56:20.049496 update_engine[1550]: I0317 18:56:20.048609 1550 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 17 18:56:20.049496 update_engine[1550]: I0317 18:56:20.048614 1550 omaha_request_action.cc:621] Omaha request response: Mar 17 18:56:20.049496 update_engine[1550]: E0317 18:56:20.048713 1550 omaha_request_action.cc:640] Omaha request network transfer failed. Mar 17 18:56:20.049496 update_engine[1550]: I0317 18:56:20.048727 1550 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 17 18:56:20.049496 update_engine[1550]: I0317 18:56:20.048730 1550 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 18:56:20.049496 update_engine[1550]: I0317 18:56:20.048733 1550 update_attempter.cc:306] Processing Done. Mar 17 18:56:20.049496 update_engine[1550]: E0317 18:56:20.048746 1550 update_attempter.cc:619] Update failed. Mar 17 18:56:20.049496 update_engine[1550]: I0317 18:56:20.048749 1550 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 17 18:56:20.049496 update_engine[1550]: I0317 18:56:20.048753 1550 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 17 18:56:20.049496 update_engine[1550]: I0317 18:56:20.048756 1550 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Mar 17 18:56:20.049496 update_engine[1550]: I0317 18:56:20.048831 1550 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 18:56:20.049496 update_engine[1550]: I0317 18:56:20.048853 1550 omaha_request_action.cc:270] Posting an Omaha request to disabled Mar 17 18:56:20.049496 update_engine[1550]: I0317 18:56:20.048856 1550 omaha_request_action.cc:271] Request: Mar 17 18:56:20.049496 update_engine[1550]: Mar 17 18:56:20.049496 update_engine[1550]: Mar 17 18:56:20.049496 update_engine[1550]: Mar 17 18:56:20.049960 locksmithd[1644]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 17 18:56:20.050195 update_engine[1550]: Mar 17 18:56:20.050195 update_engine[1550]: Mar 17 18:56:20.050195 update_engine[1550]: Mar 17 18:56:20.050195 update_engine[1550]: I0317 18:56:20.048887 1550 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:56:20.050195 update_engine[1550]: I0317 18:56:20.049031 1550 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:56:20.050195 update_engine[1550]: I0317 18:56:20.049204 1550 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:56:20.062940 update_engine[1550]: E0317 18:56:20.062906 1550 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:56:20.063043 update_engine[1550]: I0317 18:56:20.063001 1550 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 17 18:56:20.063043 update_engine[1550]: I0317 18:56:20.063006 1550 omaha_request_action.cc:621] Omaha request response: Mar 17 18:56:20.063043 update_engine[1550]: I0317 18:56:20.063011 1550 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 18:56:20.063043 update_engine[1550]: I0317 18:56:20.063014 1550 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 18:56:20.063043 update_engine[1550]: I0317 18:56:20.063017 1550 update_attempter.cc:306] Processing Done. Mar 17 18:56:20.063043 update_engine[1550]: I0317 18:56:20.063032 1550 update_attempter.cc:310] Error event sent. Mar 17 18:56:20.063043 update_engine[1550]: I0317 18:56:20.063042 1550 update_check_scheduler.cc:74] Next update check in 48m50s Mar 17 18:56:20.063396 locksmithd[1644]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 17 18:56:24.864646 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:56:24.864874 kernel: audit: type=1130 audit(1742237784.837:584): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.12:22-10.200.16.10:41788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:24.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.12:22-10.200.16.10:41788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:24.837963 systemd[1]: Started sshd@24-10.200.20.12:22-10.200.16.10:41788.service. 
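The update_engine lines above show Flatcar's update client failing its Omaha update check: the request is posted "to disabled", i.e. the configured server is literally the string "disabled", so libcurl reports "Could not resolve host: disabled" on every attempt. After the visible retries the action processor aborts, converts error 2000 to kActionCodeOmahaErrorInHTTPResponse (payload error 37), attempts to post the error event (which hits the same resolution failure before "Error event sent" and "Processing Done"), and schedules the next check in 48m50s, with locksmithd mirroring the state transitions. The fetch attempts in this excerpt are roughly ten seconds apart, which a quick check of the logged timestamps confirms:

```python
from datetime import datetime

# Timestamps of the "Starting/Resuming transfer" attempts in this excerpt,
# copied from the update_engine lines above.
attempts = ["18:55:59.972927", "18:56:09.973954", "18:56:19.971144"]
times = [datetime.strptime(t, "%H:%M:%S.%f") for t in attempts]
print([round((b - a).total_seconds(), 1) for a, b in zip(times, times[1:])])
# -> [10.0, 10.0]: the fetcher retries the check about every ten seconds here
```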
Mar 17 18:56:25.292000 audit[6203]: USER_ACCT pid=6203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:25.296077 sshd[6203]: Accepted publickey for core from 10.200.16.10 port 41788 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:56:25.299746 sshd[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:56:25.298000 audit[6203]: CRED_ACQ pid=6203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:25.342559 kernel: audit: type=1101 audit(1742237785.292:585): pid=6203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:25.342712 kernel: audit: type=1103 audit(1742237785.298:586): pid=6203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:25.357197 kernel: audit: type=1006 audit(1742237785.298:587): pid=6203 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Mar 17 18:56:25.298000 audit[6203]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffffead550 a2=3 a3=1 items=0 ppid=1 pid=6203 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:25.382002 kernel: audit: type=1300 audit(1742237785.298:587): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffffead550 a2=3 a3=1 items=0 ppid=1 pid=6203 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:25.298000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:25.390586 kernel: audit: type=1327 audit(1742237785.298:587): proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:25.393672 systemd[1]: Started session-27.scope. Mar 17 18:56:25.394205 systemd-logind[1549]: New session 27 of user core. 
Mar 17 18:56:25.397000 audit[6203]: USER_START pid=6203 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:25.425000 audit[6206]: CRED_ACQ pid=6206 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:25.448143 kernel: audit: type=1105 audit(1742237785.397:588): pid=6203 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:25.448274 kernel: audit: type=1103 audit(1742237785.425:589): pid=6206 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:25.765661 sshd[6203]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:25.766000 audit[6203]: USER_END pid=6203 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:25.769693 systemd[1]: sshd@24-10.200.20.12:22-10.200.16.10:41788.service: Deactivated successfully. Mar 17 18:56:25.770758 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 18:56:25.767000 audit[6203]: CRED_DISP pid=6203 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:25.796323 kernel: audit: type=1106 audit(1742237785.766:590): pid=6203 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:25.796046 systemd-logind[1549]: Session 27 logged out. Waiting for processes to exit. Mar 17 18:56:25.821024 systemd-logind[1549]: Removed session 27. Mar 17 18:56:25.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.12:22-10.200.16.10:41788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:25.821892 kernel: audit: type=1104 audit(1742237785.767:591): pid=6203 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:30.349522 systemd[1]: run-containerd-runc-k8s.io-c095753d191fcbffbc8b75aece1c7488e9f3213681409fbc4369c81a4d9ec146-runc.tb7sOJ.mount: Deactivated successfully. 
Mar 17 18:56:30.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.12:22-10.200.16.10:42310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:30.843650 systemd[1]: Started sshd@25-10.200.20.12:22-10.200.16.10:42310.service. Mar 17 18:56:30.848851 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:56:30.848973 kernel: audit: type=1130 audit(1742237790.842:593): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.12:22-10.200.16.10:42310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:31.306000 audit[6237]: USER_ACCT pid=6237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:31.311110 sshd[6237]: Accepted publickey for core from 10.200.16.10 port 42310 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:56:31.313492 sshd[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:56:31.312000 audit[6237]: CRED_ACQ pid=6237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:31.338735 systemd[1]: Started session-28.scope. Mar 17 18:56:31.339900 systemd-logind[1549]: New session 28 of user core. Mar 17 18:56:31.358315 kernel: audit: type=1101 audit(1742237791.306:594): pid=6237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:31.358435 kernel: audit: type=1103 audit(1742237791.312:595): pid=6237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Mar 17 18:56:31.374895 kernel: audit: type=1006 audit(1742237791.312:596): pid=6237 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Mar 17 18:56:31.312000 audit[6237]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcef0abd0 a2=3 a3=1 items=0 ppid=1 pid=6237 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:31.400044 kernel: audit: type=1300 audit(1742237791.312:596): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcef0abd0 a2=3 a3=1 items=0 ppid=1 pid=6237 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:31.312000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:31.408601 kernel: audit: type=1327 audit(1742237791.312:596): proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:31.345000 audit[6237]: USER_START pid=6237 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:31.436848 kernel: audit: type=1105 audit(1742237791.345:597): pid=6237 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:31.437014 kernel: audit: type=1103 audit(1742237791.351:598): pid=6240 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:31.351000 audit[6240]: CRED_ACQ pid=6240 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:31.716956 sshd[6237]: pam_unix(sshd:session): session closed for user core
Mar 17 18:56:31.718000 audit[6237]: USER_END pid=6237 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:31.720611 systemd[1]: sshd@25-10.200.20.12:22-10.200.16.10:42310.service: Deactivated successfully.
Mar 17 18:56:31.721507 systemd[1]: session-28.scope: Deactivated successfully.
Mar 17 18:56:31.723283 systemd-logind[1549]: Session 28 logged out. Waiting for processes to exit.
Mar 17 18:56:31.724223 systemd-logind[1549]: Removed session 28.
Mar 17 18:56:31.718000 audit[6237]: CRED_DISP pid=6237 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:31.749212 systemd[1]: Started sshd@26-10.200.20.12:22-47.108.74.203:60076.service.
Mar 17 18:56:31.773271 kernel: audit: type=1106 audit(1742237791.718:599): pid=6237 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:31.774092 kernel: audit: type=1104 audit(1742237791.718:600): pid=6237 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:31.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.12:22-10.200.16.10:42310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:56:31.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.12:22-47.108.74.203:60076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:56:35.248082 systemd[1]: run-containerd-runc-k8s.io-5b09cf7369d1b40478e8ae3631e59ae69be41e1f9171ab3ce951bbc49e85a076-runc.1dOoqC.mount: Deactivated successfully.
Mar 17 18:56:36.817348 kernel: kauditd_printk_skb: 2 callbacks suppressed
Mar 17 18:56:36.817482 kernel: audit: type=1130 audit(1742237796.786:603): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.20.12:22-10.200.16.10:42314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:56:36.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.20.12:22-10.200.16.10:42314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:56:36.787070 systemd[1]: Started sshd@27-10.200.20.12:22-10.200.16.10:42314.service.
Mar 17 18:56:37.200000 audit[6274]: USER_ACCT pid=6274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:37.201983 sshd[6274]: Accepted publickey for core from 10.200.16.10 port 42314 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:56:37.203800 sshd[6274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:56:37.202000 audit[6274]: CRED_ACQ pid=6274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:37.250830 kernel: audit: type=1101 audit(1742237797.200:604): pid=6274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:37.251109 kernel: audit: type=1103 audit(1742237797.202:605): pid=6274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:37.266790 kernel: audit: type=1006 audit(1742237797.202:606): pid=6274 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Mar 17 18:56:37.267213 kernel: audit: type=1300 audit(1742237797.202:606): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffb2aa4a0 a2=3 a3=1 items=0 ppid=1 pid=6274 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:56:37.202000 audit[6274]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffb2aa4a0 a2=3 a3=1 items=0 ppid=1 pid=6274 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:56:37.202000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Mar 17 18:56:37.302907 kernel: audit: type=1327 audit(1742237797.202:606): proctitle=737368643A20636F7265205B707269765D
Mar 17 18:56:37.305776 systemd-logind[1549]: New session 29 of user core.
Mar 17 18:56:37.306280 systemd[1]: Started session-29.scope.
Mar 17 18:56:37.310000 audit[6274]: USER_START pid=6274 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:37.312000 audit[6277]: CRED_ACQ pid=6277 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:37.363062 kernel: audit: type=1105 audit(1742237797.310:607): pid=6274 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:37.363199 kernel: audit: type=1103 audit(1742237797.312:608): pid=6277 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:37.605334 sshd[6274]: pam_unix(sshd:session): session closed for user core
Mar 17 18:56:37.605000 audit[6274]: USER_END pid=6274 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:37.608918 systemd[1]: sshd@27-10.200.20.12:22-10.200.16.10:42314.service: Deactivated successfully.
Mar 17 18:56:37.609901 systemd[1]: session-29.scope: Deactivated successfully.
Mar 17 18:56:37.606000 audit[6274]: CRED_DISP pid=6274 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:37.635399 systemd-logind[1549]: Session 29 logged out. Waiting for processes to exit.
Mar 17 18:56:37.657458 kernel: audit: type=1106 audit(1742237797.605:609): pid=6274 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:37.657599 kernel: audit: type=1104 audit(1742237797.606:610): pid=6274 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Mar 17 18:56:37.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.20.12:22-10.200.16.10:42314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:56:37.658268 systemd-logind[1549]: Removed session 29.
Mar 17 18:56:37.801985 sshd[6252]: kex_exchange_identification: banner line contains invalid characters
Mar 17 18:56:37.802443 sshd[6252]: banner exchange: Connection from 47.108.74.203 port 60076: invalid format
Mar 17 18:56:37.803245 systemd[1]: sshd@26-10.200.20.12:22-47.108.74.203:60076.service: Deactivated successfully.
Mar 17 18:56:37.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.12:22-47.108.74.203:60076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'