Dec 13 14:05:23.061093 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 14:05:23.061112 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024 Dec 13 14:05:23.061120 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Dec 13 14:05:23.061127 kernel: printk: bootconsole [pl11] enabled Dec 13 14:05:23.061132 kernel: efi: EFI v2.70 by EDK II Dec 13 14:05:23.061138 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3763cf98 Dec 13 14:05:23.061144 kernel: random: crng init done Dec 13 14:05:23.061150 kernel: ACPI: Early table checksum verification disabled Dec 13 14:05:23.061155 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Dec 13 14:05:23.061160 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:05:23.061166 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:05:23.061171 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 13 14:05:23.061178 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:05:23.061183 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:05:23.061190 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:05:23.061196 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:05:23.061202 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:05:23.061210 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:05:23.061216 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Dec 13 14:05:23.061221 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:05:23.061227 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Dec 13 14:05:23.061233 kernel: NUMA: Failed to initialise from firmware Dec 13 14:05:23.061238 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] Dec 13 14:05:23.061244 kernel: NUMA: NODE_DATA [mem 0x1bf7f1900-0x1bf7f6fff] Dec 13 14:05:23.061250 kernel: Zone ranges: Dec 13 14:05:23.061255 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Dec 13 14:05:23.061261 kernel: DMA32 empty Dec 13 14:05:23.061267 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 14:05:23.061273 kernel: Movable zone start for each node Dec 13 14:05:23.061279 kernel: Early memory node ranges Dec 13 14:05:23.061284 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Dec 13 14:05:23.061290 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Dec 13 14:05:23.061296 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Dec 13 14:05:23.061302 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Dec 13 14:05:23.061307 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Dec 13 14:05:23.061313 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Dec 13 14:05:23.061342 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 14:05:23.061351 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Dec 13 
14:05:23.061356 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Dec 13 14:05:23.061362 kernel: psci: probing for conduit method from ACPI. Dec 13 14:05:23.061373 kernel: psci: PSCIv1.1 detected in firmware. Dec 13 14:05:23.061379 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 14:05:23.061385 kernel: psci: MIGRATE_INFO_TYPE not supported. Dec 13 14:05:23.061391 kernel: psci: SMC Calling Convention v1.4 Dec 13 14:05:23.061397 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 Dec 13 14:05:23.061404 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 Dec 13 14:05:23.061410 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 Dec 13 14:05:23.061416 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 Dec 13 14:05:23.061422 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 14:05:23.061428 kernel: Detected PIPT I-cache on CPU0 Dec 13 14:05:23.061435 kernel: CPU features: detected: GIC system register CPU interface Dec 13 14:05:23.061441 kernel: CPU features: detected: Hardware dirty bit management Dec 13 14:05:23.061447 kernel: CPU features: detected: Spectre-BHB Dec 13 14:05:23.061453 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 14:05:23.061459 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 14:05:23.061465 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 14:05:23.061472 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Dec 13 14:05:23.061478 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 14:05:23.061484 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Dec 13 14:05:23.061490 kernel: Policy zone: Normal Dec 13 14:05:23.061498 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601 Dec 13 14:05:23.061505 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:05:23.061511 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:05:23.061517 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:05:23.061523 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:05:23.061530 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB) Dec 13 14:05:23.061536 kernel: Memory: 3986936K/4194160K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 207224K reserved, 0K cma-reserved) Dec 13 14:05:23.061543 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 14:05:23.061550 kernel: trace event string verifier disabled Dec 13 14:05:23.061556 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 14:05:23.061562 kernel: rcu: RCU event tracing is enabled. Dec 13 14:05:23.061568 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 14:05:23.061575 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 14:05:23.061581 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:05:23.061587 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 14:05:23.061593 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 14:05:23.061599 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 14:05:23.061605 kernel: GICv3: 960 SPIs implemented Dec 13 14:05:23.061612 kernel: GICv3: 0 Extended SPIs implemented Dec 13 14:05:23.061618 kernel: GICv3: Distributor has no Range Selector support Dec 13 14:05:23.061624 kernel: Root IRQ handler: gic_handle_irq Dec 13 14:05:23.061630 kernel: GICv3: 16 PPIs implemented Dec 13 14:05:23.061636 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 13 14:05:23.061642 kernel: ITS: No ITS available, not enabling LPIs Dec 13 14:05:23.061649 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:05:23.061655 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 14:05:23.061661 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 14:05:23.061667 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 14:05:23.061674 kernel: Console: colour dummy device 80x25 Dec 13 14:05:23.061681 kernel: printk: console [tty1] enabled Dec 13 14:05:23.061688 kernel: ACPI: Core revision 20210730 Dec 13 14:05:23.061694 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 14:05:23.061701 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:05:23.061707 kernel: LSM: Security Framework initializing Dec 13 14:05:23.061713 kernel: SELinux: Initializing. Dec 13 14:05:23.061719 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:05:23.061726 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:05:23.061732 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Dec 13 14:05:23.061740 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Dec 13 14:05:23.061746 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:05:23.061752 kernel: Remapping and enabling EFI services. Dec 13 14:05:23.061758 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:05:23.061765 kernel: Detected PIPT I-cache on CPU1 Dec 13 14:05:23.061771 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 13 14:05:23.061778 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:05:23.061784 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 14:05:23.061790 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 14:05:23.061796 kernel: SMP: Total of 2 processors activated. 
Dec 13 14:05:23.061804 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 14:05:23.061810 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 13 14:05:23.061817 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 14:05:23.061823 kernel: CPU features: detected: CRC32 instructions Dec 13 14:05:23.061829 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 14:05:23.061835 kernel: CPU features: detected: LSE atomic instructions Dec 13 14:05:23.061842 kernel: CPU features: detected: Privileged Access Never Dec 13 14:05:23.061848 kernel: CPU: All CPU(s) started at EL1 Dec 13 14:05:23.061854 kernel: alternatives: patching kernel code Dec 13 14:05:23.061862 kernel: devtmpfs: initialized Dec 13 14:05:23.061872 kernel: KASLR enabled Dec 13 14:05:23.061878 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:05:23.061887 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 14:05:23.061893 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:05:23.061899 kernel: SMBIOS 3.1.0 present. Dec 13 14:05:23.061906 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Dec 13 14:05:23.061913 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:05:23.061920 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 14:05:23.061928 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 14:05:23.061934 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 14:05:23.061941 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:05:23.061948 kernel: audit: type=2000 audit(0.091:1): state=initialized audit_enabled=0 res=1 Dec 13 14:05:23.061954 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:05:23.061961 kernel: cpuidle: using governor menu Dec 13 14:05:23.061967 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 13 14:05:23.061975 kernel: ASID allocator initialised with 32768 entries Dec 13 14:05:23.061981 kernel: ACPI: bus type PCI registered Dec 13 14:05:23.061988 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:05:23.061995 kernel: Serial: AMBA PL011 UART driver Dec 13 14:05:23.062001 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:05:23.062008 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 14:05:23.062015 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:05:23.062022 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 14:05:23.062028 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:05:23.062036 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 14:05:23.062043 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:05:23.062049 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:05:23.062056 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:05:23.062062 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:05:23.062069 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:05:23.062076 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:05:23.062082 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:05:23.062089 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 14:05:23.062097 kernel: ACPI: Interpreter enabled Dec 13 14:05:23.062103 kernel: ACPI: Using GIC for interrupt routing Dec 13 14:05:23.062110 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 13 14:05:23.062117 kernel: printk: console [ttyAMA0] enabled Dec 13 14:05:23.062123 kernel: printk: bootconsole [pl11] disabled Dec 13 14:05:23.062130 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 13 14:05:23.062136 kernel: iommu: Default domain type: Translated Dec 13 14:05:23.062143 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 14:05:23.062149 kernel: vgaarb: loaded Dec 13 14:05:23.062156 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:05:23.062164 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:05:23.062170 kernel: PTP clock support registered Dec 13 14:05:23.062177 kernel: Registered efivars operations Dec 13 14:05:23.062184 kernel: No ACPI PMU IRQ for CPU0 Dec 13 14:05:23.062190 kernel: No ACPI PMU IRQ for CPU1 Dec 13 14:05:23.062196 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 14:05:23.062203 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:05:23.062209 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:05:23.062217 kernel: pnp: PnP ACPI init Dec 13 14:05:23.062223 kernel: pnp: PnP ACPI: found 0 devices Dec 13 14:05:23.062230 kernel: NET: Registered PF_INET protocol family Dec 13 14:05:23.062236 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:05:23.062243 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 14:05:23.062250 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:05:23.062256 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 14:05:23.062263 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 14:05:23.062270 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 14:05:23.062278 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:05:23.062284 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:05:23.062291 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:05:23.062297 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:05:23.062304 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Dec 13 14:05:23.062311 kernel: kvm [1]: HYP mode not available Dec 13 14:05:23.062317 kernel: Initialise system trusted keyrings Dec 13 14:05:23.069620 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 14:05:23.069635 kernel: Key type asymmetric registered Dec 13 14:05:23.069647 kernel: Asymmetric key parser 'x509' registered Dec 13 14:05:23.069654 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:05:23.069661 kernel: io scheduler mq-deadline registered Dec 13 14:05:23.069668 kernel: io scheduler kyber registered Dec 13 14:05:23.069675 kernel: io scheduler bfq registered Dec 13 14:05:23.069681 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:05:23.069688 kernel: thunder_xcv, ver 1.0 Dec 13 14:05:23.069695 kernel: thunder_bgx, ver 1.0 Dec 13 14:05:23.069702 kernel: nicpf, ver 1.0 Dec 13 14:05:23.069708 kernel: nicvf, ver 1.0 Dec 13 14:05:23.069838 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 14:05:23.069900 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:05:22 UTC (1734098722) Dec 13 14:05:23.069909 kernel: efifb: probing for efifb Dec 13 14:05:23.069916 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 14:05:23.069923 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 14:05:23.069929 kernel: efifb: scrolling: redraw Dec 13 14:05:23.069936 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 14:05:23.069945 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 14:05:23.069951 kernel: fb0: EFI VGA frame buffer device Dec 13 14:05:23.069958 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Dec 13 14:05:23.069965 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 14:05:23.069972 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:05:23.069978 kernel: Segment Routing with IPv6 Dec 13 14:05:23.069985 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:05:23.069992 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:05:23.069998 kernel: Key type dns_resolver registered Dec 13 14:05:23.070005 kernel: registered taskstats version 1 Dec 13 14:05:23.070013 kernel: Loading compiled-in X.509 certificates Dec 13 14:05:23.070020 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a' Dec 13 14:05:23.070026 kernel: Key type .fscrypt registered Dec 13 14:05:23.070033 kernel: Key type fscrypt-provisioning registered Dec 13 14:05:23.070039 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 14:05:23.070046 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:05:23.070053 kernel: ima: No architecture policies found Dec 13 14:05:23.070059 kernel: clk: Disabling unused clocks Dec 13 14:05:23.070067 kernel: Freeing unused kernel memory: 36416K Dec 13 14:05:23.070074 kernel: Run /init as init process Dec 13 14:05:23.070081 kernel: with arguments: Dec 13 14:05:23.070087 kernel: /init Dec 13 14:05:23.070094 kernel: with environment: Dec 13 14:05:23.070100 kernel: HOME=/ Dec 13 14:05:23.070107 kernel: TERM=linux Dec 13 14:05:23.070113 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:05:23.070122 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:05:23.070133 systemd[1]: Detected virtualization microsoft. Dec 13 14:05:23.070141 systemd[1]: Detected architecture arm64. Dec 13 14:05:23.070147 systemd[1]: Running in initrd. Dec 13 14:05:23.070154 systemd[1]: No hostname configured, using default hostname. Dec 13 14:05:23.070162 systemd[1]: Hostname set to <localhost>. Dec 13 14:05:23.070169 systemd[1]: Initializing machine ID from random generator. Dec 13 14:05:23.070176 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:05:23.070184 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:05:23.070192 systemd[1]: Reached target cryptsetup.target. Dec 13 14:05:23.070199 systemd[1]: Reached target paths.target. Dec 13 14:05:23.070205 systemd[1]: Reached target slices.target. Dec 13 14:05:23.070212 systemd[1]: Reached target swap.target. Dec 13 14:05:23.070219 systemd[1]: Reached target timers.target. Dec 13 14:05:23.070234 systemd[1]: Listening on iscsid.socket. Dec 13 14:05:23.070241 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:05:23.070251 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:05:23.070258 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:05:23.070265 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:05:23.070272 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:05:23.070280 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:05:23.070287 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:05:23.070294 systemd[1]: Reached target sockets.target. Dec 13 14:05:23.070301 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:05:23.070309 systemd[1]: Finished network-cleanup.service. 
Dec 13 14:05:23.070317 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:05:23.070338 systemd[1]: Starting systemd-journald.service... Dec 13 14:05:23.070345 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:05:23.070353 systemd[1]: Starting systemd-resolved.service... Dec 13 14:05:23.070360 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:05:23.070371 systemd-journald[276]: Journal started Dec 13 14:05:23.070417 systemd-journald[276]: Runtime Journal (/run/log/journal/9390744fed3941a196ff8bf54b110292) is 8.0M, max 78.5M, 70.5M free. Dec 13 14:05:23.047359 systemd-modules-load[277]: Inserted module 'overlay' Dec 13 14:05:23.100006 systemd-resolved[278]: Positive Trust Anchors: Dec 13 14:05:23.116667 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:05:23.116688 systemd[1]: Started systemd-journald.service. Dec 13 14:05:23.116700 kernel: Bridge firewalling registered Dec 13 14:05:23.100024 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:05:23.151415 kernel: audit: type=1130 audit(1734098723.129:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.151440 kernel: SCSI subsystem initialized Dec 13 14:05:23.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.100054 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:05:23.209046 kernel: audit: type=1130 audit(1734098723.156:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.102161 systemd-resolved[278]: Defaulting to hostname 'linux'. Dec 13 14:05:23.245554 kernel: audit: type=1130 audit(1734098723.213:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.245574 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:05:23.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:23.116877 systemd-modules-load[277]: Inserted module 'br_netfilter' Dec 13 14:05:23.279342 kernel: audit: type=1130 audit(1734098723.251:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.279365 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:05:23.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.130053 systemd[1]: Started systemd-resolved.service. Dec 13 14:05:23.311410 kernel: audit: type=1130 audit(1734098723.284:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.311448 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:05:23.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.190890 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:05:23.214101 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:05:23.251710 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:05:23.284873 systemd[1]: Reached target nss-lookup.target. Dec 13 14:05:23.320185 systemd-modules-load[277]: Inserted module 'dm_multipath' Dec 13 14:05:23.407957 kernel: audit: type=1130 audit(1734098723.363:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.407986 kernel: audit: type=1130 audit(1734098723.385:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.326964 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:05:23.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.332837 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:05:23.445016 kernel: audit: type=1130 audit(1734098723.415:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.338225 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:05:23.364533 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:05:23.388501 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:05:23.444486 systemd[1]: Starting dracut-cmdline.service... 
Dec 13 14:05:23.461825 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:05:23.479809 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:05:23.489044 dracut-cmdline[296]: dracut-dracut-053 Dec 13 14:05:23.514535 kernel: audit: type=1130 audit(1734098723.493:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.514637 dracut-cmdline[296]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601 Dec 13 14:05:23.586353 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:05:23.602365 kernel: iscsi: registered transport (tcp) Dec 13 14:05:23.625235 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:05:23.625291 kernel: QLogic iSCSI HBA Driver Dec 13 14:05:23.662541 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:05:23.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:23.668391 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:05:23.724342 kernel: raid6: neonx8 gen() 13788 MB/s Dec 13 14:05:23.745331 kernel: raid6: neonx8 xor() 10809 MB/s Dec 13 14:05:23.766333 kernel: raid6: neonx4 gen() 13568 MB/s Dec 13 14:05:23.788330 kernel: raid6: neonx4 xor() 11272 MB/s Dec 13 14:05:23.809333 kernel: raid6: neonx2 gen() 12997 MB/s Dec 13 14:05:23.830332 kernel: raid6: neonx2 xor() 10442 MB/s Dec 13 14:05:23.852330 kernel: raid6: neonx1 gen() 10567 MB/s Dec 13 14:05:23.873329 kernel: raid6: neonx1 xor() 8781 MB/s Dec 13 14:05:23.894332 kernel: raid6: int64x8 gen() 6278 MB/s Dec 13 14:05:23.916330 kernel: raid6: int64x8 xor() 3536 MB/s Dec 13 14:05:23.937336 kernel: raid6: int64x4 gen() 7242 MB/s Dec 13 14:05:23.958332 kernel: raid6: int64x4 xor() 3860 MB/s Dec 13 14:05:23.980330 kernel: raid6: int64x2 gen() 6155 MB/s Dec 13 14:05:24.001329 kernel: raid6: int64x2 xor() 3317 MB/s Dec 13 14:05:24.022332 kernel: raid6: int64x1 gen() 5043 MB/s Dec 13 14:05:24.048683 kernel: raid6: int64x1 xor() 2648 MB/s Dec 13 14:05:24.048693 kernel: raid6: using algorithm neonx8 gen() 13788 MB/s Dec 13 14:05:24.048701 kernel: raid6: .... xor() 10809 MB/s, rmw enabled Dec 13 14:05:24.053381 kernel: raid6: using neon recovery algorithm Dec 13 14:05:24.071333 kernel: xor: measuring software checksum speed Dec 13 14:05:24.071345 kernel: 8regs : 17217 MB/sec Dec 13 14:05:24.079838 kernel: 32regs : 20655 MB/sec Dec 13 14:05:24.083946 kernel: arm64_neon : 27141 MB/sec Dec 13 14:05:24.083966 kernel: xor: using function: arm64_neon (27141 MB/sec) Dec 13 14:05:24.146356 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Dec 13 14:05:24.156029 systemd[1]: Finished dracut-pre-udev.service. 
Dec 13 14:05:24.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:24.165000 audit: BPF prog-id=7 op=LOAD Dec 13 14:05:24.165000 audit: BPF prog-id=8 op=LOAD Dec 13 14:05:24.166430 systemd[1]: Starting systemd-udevd.service... Dec 13 14:05:24.185412 systemd-udevd[476]: Using default interface naming scheme 'v252'. Dec 13 14:05:24.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:24.191830 systemd[1]: Started systemd-udevd.service. Dec 13 14:05:24.203419 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:05:24.220462 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation Dec 13 14:05:24.248306 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:05:24.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:24.254492 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:05:24.290567 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:05:24.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:24.341340 kernel: hv_vmbus: Vmbus version:5.3 Dec 13 14:05:24.358880 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 14:05:24.358940 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 14:05:24.358950 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 14:05:24.367440 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 13 14:05:24.377521 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 14:05:24.377710 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 13 14:05:24.397337 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 14:05:24.402148 kernel: scsi host0: storvsc_host_t Dec 13 14:05:24.402221 kernel: scsi host1: storvsc_host_t Dec 13 14:05:24.413309 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 14:05:24.421219 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 14:05:24.439834 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 14:05:24.440679 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 14:05:24.440704 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 14:05:24.462977 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 14:05:24.505235 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 14:05:24.505402 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 14:05:24.505485 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 14:05:24.505561 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 14:05:24.505640 kernel: hv_netvsc 000d3ac2-b49d-000d-3ac2-b49d000d3ac2 eth0: VF slot 1 added Dec 13 14:05:24.505722 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:05:24.505732 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 
14:05:24.522337 kernel: hv_vmbus: registering driver hv_pci Dec 13 14:05:24.522391 kernel: hv_pci c36712ab-b322-490b-bae1-4c19d626dbcb: PCI VMBus probing: Using version 0x10004 Dec 13 14:05:24.644490 kernel: hv_pci c36712ab-b322-490b-bae1-4c19d626dbcb: PCI host bridge to bus b322:00 Dec 13 14:05:24.644584 kernel: pci_bus b322:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 13 14:05:24.644679 kernel: pci_bus b322:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 14:05:24.644750 kernel: pci b322:00:02.0: [15b3:1018] type 00 class 0x020000 Dec 13 14:05:24.644842 kernel: pci b322:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 14:05:24.644935 kernel: pci b322:00:02.0: enabling Extended Tags Dec 13 14:05:24.645014 kernel: pci b322:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at b322:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Dec 13 14:05:24.645090 kernel: pci_bus b322:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 14:05:24.645162 kernel: pci b322:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 14:05:24.682336 kernel: mlx5_core b322:00:02.0: firmware version: 16.30.1284 Dec 13 14:05:24.919450 kernel: mlx5_core b322:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Dec 13 14:05:24.919561 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (543) Dec 13 14:05:24.919571 kernel: hv_netvsc 000d3ac2-b49d-000d-3ac2-b49d000d3ac2 eth0: VF registering: eth1 Dec 13 14:05:24.919652 kernel: mlx5_core b322:00:02.0 eth1: joined to eth0 Dec 13 14:05:24.774483 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:05:24.911655 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:05:24.943193 kernel: mlx5_core b322:00:02.0 enP45858s1: renamed from eth1 Dec 13 14:05:25.093749 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:05:25.177760 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:05:25.184902 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:05:25.200745 systemd[1]: Starting disk-uuid.service... Dec 13 14:05:25.224352 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:05:26.238343 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:05:26.238430 disk-uuid[604]: The operation has completed successfully. Dec 13 14:05:26.297806 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:05:26.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.297895 systemd[1]: Finished disk-uuid.service. Dec 13 14:05:26.303352 systemd[1]: Starting verity-setup.service... Dec 13 14:05:26.361361 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 14:05:26.709447 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:05:26.715434 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:05:26.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:26.723605 systemd[1]: Finished verity-setup.service. Dec 13 14:05:26.778182 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:05:26.786701 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:05:26.782889 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:05:26.783687 systemd[1]: Starting ignition-setup.service... Dec 13 14:05:26.791875 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:05:26.832078 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:05:26.832139 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:05:26.837138 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:05:26.878361 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:05:26.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.888000 audit: BPF prog-id=9 op=LOAD Dec 13 14:05:26.889035 systemd[1]: Starting systemd-networkd.service... Dec 13 14:05:26.911210 systemd-networkd[868]: lo: Link UP Dec 13 14:05:26.911222 systemd-networkd[868]: lo: Gained carrier Dec 13 14:05:26.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.912003 systemd-networkd[868]: Enumeration completed Dec 13 14:05:26.915144 systemd[1]: Started systemd-networkd.service. Dec 13 14:05:26.915916 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:05:26.920557 systemd[1]: Reached target network.target. Dec 13 14:05:26.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.930295 systemd[1]: Starting iscsiuio.service... Dec 13 14:05:26.943053 systemd[1]: Started iscsiuio.service. Dec 13 14:05:27.007354 kernel: kauditd_printk_skb: 14 callbacks suppressed Dec 13 14:05:27.007377 kernel: audit: type=1130 audit(1734098726.978:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.956177 systemd[1]: Starting iscsid.service... Dec 13 14:05:27.012524 iscsid[875]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:05:27.012524 iscsid[875]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 14:05:27.012524 iscsid[875]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:05:27.012524 iscsid[875]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Dec 13 14:05:27.012524 iscsid[875]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:05:27.012524 iscsid[875]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:05:27.012524 iscsid[875]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:05:27.136903 kernel: audit: type=1130 audit(1734098727.046:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:27.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.968300 systemd[1]: Started iscsid.service. Dec 13 14:05:27.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:26.979311 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:05:27.177913 kernel: audit: type=1130 audit(1734098727.141:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:27.177939 kernel: mlx5_core b322:00:02.0 enP45858s1: Link up Dec 13 14:05:27.020921 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:05:27.203890 kernel: audit: type=1130 audit(1734098727.182:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:27.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:27.041807 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:05:27.219022 kernel: hv_netvsc 000d3ac2-b49d-000d-3ac2-b49d000d3ac2 eth0: Data path switched to VF: enP45858s1 Dec 13 14:05:27.047192 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:05:27.076891 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:05:27.235713 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:05:27.097144 systemd[1]: Reached target remote-fs.target. Dec 13 14:05:27.108662 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:05:27.129367 systemd[1]: Finished ignition-setup.service. Dec 13 14:05:27.142746 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:05:27.173303 systemd[1]: Finished dracut-pre-mount.service. 
Dec 13 14:05:27.230090 systemd-networkd[868]: enP45858s1: Link UP Dec 13 14:05:27.230160 systemd-networkd[868]: eth0: Link UP Dec 13 14:05:27.230292 systemd-networkd[868]: eth0: Gained carrier Dec 13 14:05:27.240571 systemd-networkd[868]: enP45858s1: Gained carrier Dec 13 14:05:27.261441 systemd-networkd[868]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 14:05:29.016505 systemd-networkd[868]: eth0: Gained IPv6LL Dec 13 14:05:32.531287 ignition[895]: Ignition 2.14.0 Dec 13 14:05:32.531300 ignition[895]: Stage: fetch-offline Dec 13 14:05:32.531371 ignition[895]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:05:32.531395 ignition[895]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:05:32.630645 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:05:32.630802 ignition[895]: parsed url from cmdline: "" Dec 13 14:05:32.632108 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:05:32.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:32.630806 ignition[895]: no config URL provided Dec 13 14:05:32.671361 kernel: audit: type=1130 audit(1734098732.642:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:32.643486 systemd[1]: Starting ignition-fetch.service... Dec 13 14:05:32.630811 ignition[895]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:05:32.630819 ignition[895]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:05:32.630825 ignition[895]: failed to fetch config: resource requires networking Dec 13 14:05:32.631060 ignition[895]: Ignition finished successfully Dec 13 14:05:32.658487 ignition[902]: Ignition 2.14.0 Dec 13 14:05:32.658494 ignition[902]: Stage: fetch Dec 13 14:05:32.658604 ignition[902]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:05:32.658623 ignition[902]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:05:32.661305 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:05:32.661453 ignition[902]: parsed url from cmdline: "" Dec 13 14:05:32.661457 ignition[902]: no config URL provided Dec 13 14:05:32.661462 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:05:32.661480 ignition[902]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:05:32.661510 ignition[902]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 14:05:32.770056 ignition[902]: GET result: OK Dec 13 14:05:32.770141 ignition[902]: config has been read from IMDS userdata Dec 13 14:05:32.773636 unknown[902]: fetched base config from "system" Dec 13 14:05:32.805261 kernel: audit: type=1130 audit(1734098732.783:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:32.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:32.770191 ignition[902]: parsing config with SHA512: 93e09891d42f5b2dea9c98d0d0f18107342fb0bb81c8a971fb0e107158b33d59052843f46304d20eb054d78026725c419b0d1acbae0bee0ed71ad20389c5b268 Dec 13 14:05:32.773643 unknown[902]: fetched base config from "system" Dec 13 14:05:32.774198 ignition[902]: fetch: fetch complete Dec 13 14:05:32.773656 unknown[902]: fetched user config from "azure" Dec 13 14:05:32.774206 ignition[902]: fetch: fetch passed Dec 13 14:05:32.779139 systemd[1]: Finished ignition-fetch.service. Dec 13 14:05:32.848279 kernel: audit: type=1130 audit(1734098732.824:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:32.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:32.774250 ignition[902]: Ignition finished successfully Dec 13 14:05:32.784237 systemd[1]: Starting ignition-kargs.service... Dec 13 14:05:32.812997 ignition[908]: Ignition 2.14.0 Dec 13 14:05:32.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:32.821022 systemd[1]: Finished ignition-kargs.service. Dec 13 14:05:32.813003 ignition[908]: Stage: kargs Dec 13 14:05:32.900448 kernel: audit: type=1130 audit(1734098732.861:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:32.826230 systemd[1]: Starting ignition-disks.service... Dec 13 14:05:32.813116 ignition[908]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:05:32.857775 systemd[1]: Finished ignition-disks.service. Dec 13 14:05:32.813135 ignition[908]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:05:32.862415 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:05:32.815834 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:05:32.887965 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:05:32.818239 ignition[908]: kargs: kargs passed Dec 13 14:05:32.896895 systemd[1]: Reached target local-fs.target. Dec 13 14:05:32.818297 ignition[908]: Ignition finished successfully Dec 13 14:05:32.904810 systemd[1]: Reached target sysinit.target. Dec 13 14:05:32.837191 ignition[914]: Ignition 2.14.0 Dec 13 14:05:32.915394 systemd[1]: Reached target basic.target. Dec 13 14:05:32.837200 ignition[914]: Stage: disks Dec 13 14:05:32.926616 systemd[1]: Starting systemd-fsck-root.service... 
Dec 13 14:05:32.837351 ignition[914]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:05:32.837370 ignition[914]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:05:32.840441 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:05:32.850162 ignition[914]: disks: disks passed Dec 13 14:05:32.850221 ignition[914]: Ignition finished successfully Dec 13 14:05:33.033574 systemd-fsck[922]: ROOT: clean, 621/7326000 files, 481076/7359488 blocks Dec 13 14:05:33.040463 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:05:33.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:33.063458 systemd[1]: Mounting sysroot.mount... Dec 13 14:05:33.067523 kernel: audit: type=1130 audit(1734098733.044:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:33.084346 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:05:33.084954 systemd[1]: Mounted sysroot.mount. Dec 13 14:05:33.088943 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:05:33.197894 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:05:33.202637 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 14:05:33.209793 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:05:33.209830 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:05:33.215736 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:05:33.289166 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:05:33.294235 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:05:33.322523 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (933) Dec 13 14:05:33.322577 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:05:33.327433 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:05:33.332097 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:05:33.334364 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:05:33.341563 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:05:33.370607 initrd-setup-root[964]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:05:33.404614 initrd-setup-root[972]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:05:33.413167 initrd-setup-root[980]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:05:34.290752 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:05:34.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:34.314518 systemd[1]: Starting ignition-mount.service... Dec 13 14:05:34.324040 kernel: audit: type=1130 audit(1734098734.295:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:34.320581 systemd[1]: Starting sysroot-boot.service... Dec 13 14:05:34.333711 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:05:34.333868 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:05:34.353533 systemd[1]: Finished sysroot-boot.service. Dec 13 14:05:34.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:34.378342 kernel: audit: type=1130 audit(1734098734.357:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:34.411410 ignition[1002]: INFO : Ignition 2.14.0 Dec 13 14:05:34.411410 ignition[1002]: INFO : Stage: mount Dec 13 14:05:34.420348 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:05:34.420348 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:05:34.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:34.462580 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:05:34.462580 ignition[1002]: INFO : mount: mount passed Dec 13 14:05:34.462580 ignition[1002]: INFO : Ignition finished successfully Dec 13 14:05:34.480287 kernel: audit: type=1130 audit(1734098734.434:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:34.430616 systemd[1]: Finished ignition-mount.service. Dec 13 14:05:36.027914 coreos-metadata[932]: Dec 13 14:05:36.027 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 14:05:36.035862 coreos-metadata[932]: Dec 13 14:05:36.030 INFO Fetch successful Dec 13 14:05:36.067921 coreos-metadata[932]: Dec 13 14:05:36.067 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 14:05:36.091282 coreos-metadata[932]: Dec 13 14:05:36.091 INFO Fetch successful Dec 13 14:05:36.114868 coreos-metadata[932]: Dec 13 14:05:36.114 INFO wrote hostname ci-3510.3.6-a-c740448bc5 to /sysroot/etc/hostname Dec 13 14:05:36.122967 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 14:05:36.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:36.128749 systemd[1]: Starting ignition-files.service... Dec 13 14:05:36.154605 kernel: audit: type=1130 audit(1734098736.127:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:36.153898 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Dec 13 14:05:36.172334 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1012) Dec 13 14:05:36.184865 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:05:36.184923 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:05:36.184943 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:05:36.194448 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:05:36.210937 ignition[1031]: INFO : Ignition 2.14.0 Dec 13 14:05:36.210937 ignition[1031]: INFO : Stage: files Dec 13 14:05:36.221912 ignition[1031]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:05:36.221912 ignition[1031]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:05:36.221912 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:05:36.221912 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:05:36.221912 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:05:36.221912 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:05:36.395781 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:05:36.404449 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:05:36.412248 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:05:36.411680 unknown[1031]: wrote ssh authorized keys file for user: core Dec 13 14:05:36.427190 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:05:36.437627 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:05:36.437627 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:05:36.437627 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 14:05:36.671611 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 14:05:36.781781 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:05:36.792494 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:05:36.792494 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:05:36.792494 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:05:36.792494 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:05:36.792494 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:05:36.792494 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Dec 13 14:05:36.792494 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:05:36.792494 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:05:36.792494 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:05:36.792494 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:05:36.792494 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:05:36.792494 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:05:36.792494 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:05:36.792494 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:05:36.950976 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1034) Dec 13 14:05:36.841206 systemd[1]: mnt-oem895853562.mount: Deactivated successfully. Dec 13 14:05:36.956022 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem895853562" Dec 13 14:05:36.956022 ignition[1031]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem895853562": device or resource busy Dec 13 14:05:36.956022 ignition[1031]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem895853562", trying btrfs: device or resource busy Dec 13 14:05:36.956022 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem895853562" Dec 13 14:05:36.956022 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem895853562" Dec 13 14:05:36.956022 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem895853562" Dec 13 14:05:36.956022 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem895853562" Dec 13 14:05:36.956022 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:05:36.956022 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:05:36.956022 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:05:36.956022 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1501581196" Dec 13 
14:05:36.956022 ignition[1031]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1501581196": device or resource busy Dec 13 14:05:36.956022 ignition[1031]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1501581196", trying btrfs: device or resource busy Dec 13 14:05:36.956022 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1501581196" Dec 13 14:05:37.110847 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1501581196" Dec 13 14:05:37.110847 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem1501581196" Dec 13 14:05:37.110847 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem1501581196" Dec 13 14:05:37.110847 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:05:37.110847 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:05:37.110847 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 14:05:37.363971 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Dec 13 14:05:37.608456 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(14): [started] processing unit "waagent.service" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(14): [finished] processing unit "waagent.service" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(15): [started] processing unit "nvidia.service" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(15): [finished] processing unit "nvidia.service" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(16): [started] processing unit "containerd.service" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(16): op(17): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(16): op(17): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(16): [finished] processing unit "containerd.service" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(18): [started] processing unit "prepare-helm.service" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(18): [finished] processing unit "prepare-helm.service" Dec 13 14:05:37.621407 ignition[1031]: 
INFO : files: op(1a): [started] setting preset to enabled for "waagent.service" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(1a): [finished] setting preset to enabled for "waagent.service" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:05:37.621407 ignition[1031]: INFO : files: files passed Dec 13 14:05:37.621407 ignition[1031]: INFO : Ignition finished successfully Dec 13 14:05:37.971870 kernel: audit: type=1130 audit(1734098737.632:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.971900 kernel: audit: type=1130 audit(1734098737.689:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.971910 kernel: audit: type=1131 audit(1734098737.689:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.971926 kernel: audit: type=1130 audit(1734098737.741:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.971937 kernel: audit: type=1130 audit(1734098737.828:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.971949 kernel: audit: type=1131 audit(1734098737.853:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.971958 kernel: audit: type=1130 audit(1734098737.952:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:37.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.627879 systemd[1]: Finished ignition-files.service. Dec 13 14:05:37.635508 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:05:37.661459 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:05:38.000563 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:05:37.662369 systemd[1]: Starting ignition-quench.service... Dec 13 14:05:37.675688 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:05:37.675799 systemd[1]: Finished ignition-quench.service. Dec 13 14:05:38.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.690557 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:05:37.741596 systemd[1]: Reached target ignition-complete.target. Dec 13 14:05:38.073469 kernel: audit: type=1131 audit(1734098738.037:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.783999 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:05:37.818773 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:05:37.818877 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:05:37.853477 systemd[1]: Reached target initrd-fs.target. Dec 13 14:05:37.879641 systemd[1]: Reached target initrd.target. Dec 13 14:05:37.893225 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:05:37.894196 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:05:37.944252 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:05:37.955584 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:05:37.991552 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:05:38.005430 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:05:38.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.019296 systemd[1]: Stopped target timers.target. Dec 13 14:05:38.027770 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Dec 13 14:05:38.203432 kernel: audit: type=1131 audit(1734098738.169:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.027885 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:05:38.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.060184 systemd[1]: Stopped target initrd.target. Dec 13 14:05:38.241462 kernel: audit: type=1131 audit(1734098738.208:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.069072 systemd[1]: Stopped target basic.target. Dec 13 14:05:38.077478 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:05:38.273402 kernel: audit: type=1131 audit(1734098738.241:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.086214 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:05:38.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.095872 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:05:38.104583 systemd[1]: Stopped target remote-fs.target. Dec 13 14:05:38.112661 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:05:38.308678 ignition[1069]: INFO : Ignition 2.14.0 Dec 13 14:05:38.308678 ignition[1069]: INFO : Stage: umount Dec 13 14:05:38.308678 ignition[1069]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:05:38.308678 ignition[1069]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:05:38.308678 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:05:38.308678 ignition[1069]: INFO : umount: umount passed Dec 13 14:05:38.308678 ignition[1069]: INFO : Ignition finished successfully Dec 13 14:05:38.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:38.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.121283 systemd[1]: Stopped target sysinit.target. Dec 13 14:05:38.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.133047 systemd[1]: Stopped target local-fs.target. Dec 13 14:05:38.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.141971 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:05:38.152305 systemd[1]: Stopped target swap.target. Dec 13 14:05:38.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.161008 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:05:38.161154 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:05:38.190243 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:05:38.198402 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:05:38.198509 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:05:38.230519 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:05:38.230706 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:05:38.242618 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:05:38.242728 systemd[1]: Stopped ignition-files.service. Dec 13 14:05:38.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.250556 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 14:05:38.250663 systemd[1]: Stopped flatcar-metadata-hostname.service. Dec 13 14:05:38.277867 systemd[1]: Stopping ignition-mount.service... Dec 13 14:05:38.303848 systemd[1]: Stopping iscsiuio.service... Dec 13 14:05:38.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.317253 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:05:38.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.332387 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:05:38.332568 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:05:38.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:38.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.337385 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:05:38.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.559000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:05:38.337489 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:05:38.359082 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:05:38.359826 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:05:38.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.359945 systemd[1]: Stopped iscsiuio.service. Dec 13 14:05:38.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.368538 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:05:38.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.368648 systemd[1]: Stopped ignition-mount.service. Dec 13 14:05:38.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.378743 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:05:38.378854 systemd[1]: Stopped ignition-disks.service. Dec 13 14:05:38.383597 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:05:38.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.383668 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:05:38.392571 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:05:38.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.392616 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:05:38.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.400376 systemd[1]: Stopped target network.target. Dec 13 14:05:38.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.408828 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Dec 13 14:05:38.716243 kernel: hv_netvsc 000d3ac2-b49d-000d-3ac2-b49d000d3ac2 eth0: Data path switched from VF: enP45858s1 Dec 13 14:05:38.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.408880 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:05:38.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.417135 systemd[1]: Stopped target paths.target. Dec 13 14:05:38.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.424674 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:05:38.433545 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:05:38.440835 systemd[1]: Stopped target slices.target. Dec 13 14:05:38.455191 systemd[1]: Stopped target sockets.target. Dec 13 14:05:38.463758 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:05:38.463794 systemd[1]: Closed iscsid.socket. Dec 13 14:05:38.473356 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:05:38.473392 systemd[1]: Closed iscsiuio.socket. Dec 13 14:05:38.483703 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:05:38.483747 systemd[1]: Stopped ignition-setup.service. Dec 13 14:05:38.493508 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:05:38.501527 systemd-networkd[868]: eth0: DHCPv6 lease lost Dec 13 14:05:38.786000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:05:38.502847 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:05:38.518095 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:05:38.518198 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:05:38.527034 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:05:38.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.527127 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:05:38.535750 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:05:38.535840 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:05:38.550681 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:05:38.550802 systemd[1]: Stopped sysroot-boot.service. 
Dec 13 14:05:38.846000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:05:38.846000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:05:38.846000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:05:38.848000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:05:38.848000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:05:38.560222 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:05:38.560263 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:05:38.573731 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:05:38.573782 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:05:38.586062 systemd[1]: Stopping network-cleanup.service... Dec 13 14:05:38.594840 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:05:38.890453 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Dec 13 14:05:38.890515 iscsid[875]: iscsid shutting down. Dec 13 14:05:38.594901 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:05:38.599879 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:05:38.599935 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:05:38.613969 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:05:38.614017 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:05:38.618911 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:05:38.634438 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:05:38.635029 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:05:38.635151 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:05:38.642330 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:05:38.642391 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:05:38.651508 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:05:38.651546 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:05:38.656345 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:05:38.656409 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:05:38.664910 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:05:38.664953 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:05:38.674153 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:05:38.674189 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:05:38.682942 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:05:38.692355 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:05:38.692415 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:05:38.706115 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:05:38.706187 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:05:38.710656 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:05:38.710696 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:05:38.722151 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 14:05:38.722635 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:05:38.722732 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:05:38.803573 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:05:38.803695 systemd[1]: Stopped network-cleanup.service. Dec 13 14:05:38.811043 systemd[1]: Reached target initrd-switch-root.target. 
Dec 13 14:05:38.821681 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:05:38.842420 systemd[1]: Switching root. Dec 13 14:05:38.891432 systemd-journald[276]: Journal stopped Dec 13 14:05:57.342952 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:05:57.342974 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:05:57.342985 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:05:57.342995 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:05:57.343003 kernel: SELinux: policy capability open_perms=1 Dec 13 14:05:57.343010 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:05:57.343019 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:05:57.343028 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:05:57.343036 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:05:57.343044 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:05:57.343052 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:05:57.343063 systemd[1]: Successfully loaded SELinux policy in 118.789ms. Dec 13 14:05:57.343073 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.676ms. Dec 13 14:05:57.343084 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:05:57.343094 systemd[1]: Detected virtualization microsoft. Dec 13 14:05:57.343105 systemd[1]: Detected architecture arm64. Dec 13 14:05:57.343114 systemd[1]: Detected first boot. Dec 13 14:05:57.343123 systemd[1]: Hostname set to <ci-3510.3.6-a-c740448bc5>. Dec 13 14:05:57.343132 systemd[1]: Initializing machine ID from random generator. Dec 13 14:05:57.343141 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:05:57.343152 kernel: kauditd_printk_skb: 39 callbacks suppressed Dec 13 14:05:57.343162 kernel: audit: type=1400 audit(1734098745.698:88): avc: denied { associate } for pid=1120 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:05:57.343173 kernel: audit: type=1300 audit(1734098745.698:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014766c a1=40000c8af8 a2=40000cea00 a3=32 items=0 ppid=1103 pid=1120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:05:57.343184 kernel: audit: type=1327 audit(1734098745.698:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:05:57.343193 kernel: audit: type=1400 audit(1734098745.712:89): avc: denied { associate } for pid=1120 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:05:57.343202 kernel: audit: type=1300 audit(1734098745.712:89): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000147745 a2=1ed a3=0 items=2 ppid=1103 pid=1120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:05:57.343211 kernel: audit: type=1307 audit(1734098745.712:89): cwd="/" Dec 13 14:05:57.343221 kernel: audit: type=1302 audit(1734098745.712:89): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:05:57.343230 kernel: audit: type=1302 audit(1734098745.712:89): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:05:57.343240 kernel: audit: type=1327 audit(1734098745.712:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:05:57.343248 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:05:57.343258 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:05:57.343269 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:05:57.343279 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:05:57.343290 systemd[1]: Queued start job for default target multi-user.target. 
Dec 13 14:05:57.343299 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:05:57.343309 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:05:57.343327 systemd[1]: Created slice system-getty.slice. Dec 13 14:05:57.343338 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:05:57.343347 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:05:57.343359 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:05:57.343369 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:05:57.343379 systemd[1]: Created slice user.slice. Dec 13 14:05:57.343388 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:05:57.343398 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:05:57.343407 systemd[1]: Set up automount boot.automount. Dec 13 14:05:57.343416 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:05:57.343425 systemd[1]: Reached target integritysetup.target. Dec 13 14:05:57.343435 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:05:57.343444 systemd[1]: Reached target remote-fs.target. Dec 13 14:05:57.343454 systemd[1]: Reached target slices.target. Dec 13 14:05:57.343464 systemd[1]: Reached target swap.target. Dec 13 14:05:57.343474 systemd[1]: Reached target torcx.target. Dec 13 14:05:57.343484 systemd[1]: Reached target veritysetup.target. Dec 13 14:05:57.343494 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:05:57.343504 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:05:57.343513 kernel: audit: type=1400 audit(1734098756.780:90): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:05:57.343523 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:05:57.343534 kernel: audit: type=1335 audit(1734098756.780:91): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:05:57.343543 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:05:57.343552 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:05:57.343562 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:05:57.343571 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:05:57.343581 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:05:57.343592 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:05:57.343602 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:05:57.343611 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:05:57.343621 systemd[1]: Mounting media.mount... Dec 13 14:05:57.343630 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:05:57.343640 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:05:57.343650 systemd[1]: Mounting tmp.mount... Dec 13 14:05:57.343661 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:05:57.343671 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:05:57.343680 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:05:57.343698 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:05:57.343713 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:05:57.343723 systemd[1]: Starting modprobe@drm.service... Dec 13 14:05:57.343732 systemd[1]: Starting modprobe@efi_pstore.service... 
Dec 13 14:05:57.343741 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:05:57.343751 systemd[1]: Starting modprobe@loop.service... Dec 13 14:05:57.343762 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:05:57.343772 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 14:05:57.343782 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 14:05:57.343792 systemd[1]: Starting systemd-journald.service... Dec 13 14:05:57.343801 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:05:57.343810 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:05:57.343820 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:05:57.343829 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:05:57.343838 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:05:57.343849 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:05:57.343858 systemd[1]: Mounted media.mount. Dec 13 14:05:57.343867 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:05:57.343877 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:05:57.343886 systemd[1]: Mounted tmp.mount. Dec 13 14:05:57.343896 kernel: loop: module loaded Dec 13 14:05:57.343906 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:05:57.343916 kernel: audit: type=1130 audit(1734098757.183:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.343926 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:05:57.343936 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:05:57.343945 kernel: audit: type=1130 audit(1734098757.208:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.343954 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:05:57.343964 kernel: fuse: init (API version 7.34) Dec 13 14:05:57.343973 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:05:57.343983 kernel: audit: type=1130 audit(1734098757.247:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.343992 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:05:57.344003 kernel: audit: type=1131 audit(1734098757.247:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.344012 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:05:57.344024 kernel: audit: type=1130 audit(1734098757.295:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.344033 systemd[1]: Finished modprobe@drm.service. 
Dec 13 14:05:57.344043 kernel: audit: type=1131 audit(1734098757.295:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.344056 systemd-journald[1215]: Journal started Dec 13 14:05:57.344097 systemd-journald[1215]: Runtime Journal (/run/log/journal/5c6570d925a745d68cdc418ef0a6009a) is 8.0M, max 78.5M, 70.5M free. Dec 13 14:05:56.780000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:05:57.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.357490 kernel: audit: type=1305 audit(1734098757.340:98): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:05:57.357552 systemd[1]: Started systemd-journald.service. Dec 13 14:05:57.340000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:05:57.340000 audit[1215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffe8630930 a2=4000 a3=1 items=0 ppid=1 pid=1215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:05:57.362377 kernel: audit: type=1300 audit(1734098757.340:98): arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffe8630930 a2=4000 a3=1 items=0 ppid=1 pid=1215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:05:57.340000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:05:57.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:05:57.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.391864 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:05:57.392341 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:05:57.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.397621 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:05:57.397907 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:05:57.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.402598 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:05:57.402955 systemd[1]: Finished modprobe@loop.service. Dec 13 14:05:57.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.408005 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:05:57.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.413610 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:05:57.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.418831 systemd[1]: Reached target network-pre.target. Dec 13 14:05:57.424954 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:05:57.430914 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:05:57.439302 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:05:57.441433 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:05:57.447408 systemd[1]: Starting systemd-journal-flush.service... 
Dec 13 14:05:57.452534 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:05:57.453996 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:05:57.458982 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:05:57.460433 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:05:57.466827 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:05:57.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.472156 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:05:57.477238 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:05:57.483986 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:05:57.492162 udevadm[1270]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 14:05:57.509891 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:05:57.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.516616 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:05:57.546351 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:05:57.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.551655 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:05:57.556698 systemd-journald[1215]: Time spent on flushing to /var/log/journal/5c6570d925a745d68cdc418ef0a6009a is 13.149ms for 1040 entries. Dec 13 14:05:57.556698 systemd-journald[1215]: System Journal (/var/log/journal/5c6570d925a745d68cdc418ef0a6009a) is 8.0M, max 2.6G, 2.6G free. Dec 13 14:05:57.642722 systemd-journald[1215]: Received client request to flush runtime journal. Dec 13 14:05:57.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.618064 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:05:57.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:57.643753 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:05:58.476870 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:05:58.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:58.482808 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:05:59.947896 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Dec 13 14:05:59.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.050643 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:06:00.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.056955 systemd[1]: Starting systemd-udevd.service... Dec 13 14:06:00.075627 systemd-udevd[1284]: Using default interface naming scheme 'v252'. Dec 13 14:06:00.123142 systemd[1]: Started systemd-udevd.service. Dec 13 14:06:00.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.144358 systemd[1]: Starting systemd-networkd.service... Dec 13 14:06:00.161567 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:06:00.176753 systemd[1]: Found device dev-ttyAMA0.device. Dec 13 14:06:00.213817 systemd[1]: Started systemd-userdbd.service. Dec 13 14:06:00.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.249350 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:06:00.277000 audit[1293]: AVC avc: denied { confidentiality } for pid=1293 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:06:00.293370 kernel: hv_vmbus: registering driver hv_balloon Dec 13 14:06:00.293449 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 14:06:00.305133 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 14:06:00.305225 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 14:06:00.313014 kernel: Console: switching to colour dummy device 80x25 Dec 13 14:06:00.321265 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 14:06:00.321404 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 13 14:06:00.333149 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 14:06:00.277000 audit[1293]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaae92bb1b0 a1=aa2c a2=ffffb78824b0 a3=aaaae9219010 items=12 ppid=1284 pid=1293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:06:00.348469 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 14:06:00.348538 kernel: hv_vmbus: registering driver hv_utils Dec 13 14:06:00.277000 audit: CWD cwd="/" Dec 13 14:06:00.277000 audit: PATH item=0 name=(null) inode=5648 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:00.277000 audit: PATH item=1 name=(null) inode=11203 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:00.277000 audit: PATH item=2 
name=(null) inode=11203 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:00.277000 audit: PATH item=3 name=(null) inode=11204 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:00.277000 audit: PATH item=4 name=(null) inode=11203 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:00.277000 audit: PATH item=5 name=(null) inode=11205 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:00.277000 audit: PATH item=6 name=(null) inode=11203 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:00.277000 audit: PATH item=7 name=(null) inode=11206 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:00.277000 audit: PATH item=8 name=(null) inode=11203 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:00.277000 audit: PATH item=9 name=(null) inode=11207 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:00.277000 audit: PATH item=10 name=(null) inode=11203 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:00.277000 audit: PATH item=11 name=(null) inode=11208 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:00.277000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:06:00.359484 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 14:06:00.359556 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 14:06:00.359571 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 14:06:00.206625 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1304) Dec 13 14:06:00.417859 systemd-journald[1215]: Time jumped backwards, rotating. Dec 13 14:06:00.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.249518 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Dec 13 14:06:00.254038 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:06:00.260376 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:06:00.419255 systemd-networkd[1305]: lo: Link UP Dec 13 14:06:00.419265 systemd-networkd[1305]: lo: Gained carrier Dec 13 14:06:00.419669 systemd-networkd[1305]: Enumeration completed Dec 13 14:06:00.419848 systemd[1]: Started systemd-networkd.service. 
Dec 13 14:06:00.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.425728 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:06:00.456560 systemd-networkd[1305]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:06:00.503618 kernel: mlx5_core b322:00:02.0 enP45858s1: Link up Dec 13 14:06:00.532480 systemd-networkd[1305]: enP45858s1: Link UP Dec 13 14:06:00.532649 kernel: hv_netvsc 000d3ac2-b49d-000d-3ac2-b49d000d3ac2 eth0: Data path switched to VF: enP45858s1 Dec 13 14:06:00.532944 systemd-networkd[1305]: eth0: Link UP Dec 13 14:06:00.533031 systemd-networkd[1305]: eth0: Gained carrier Dec 13 14:06:00.541947 systemd-networkd[1305]: enP45858s1: Gained carrier Dec 13 14:06:00.551719 systemd-networkd[1305]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 14:06:00.878881 lvm[1363]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:06:00.910580 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:06:00.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.915739 systemd[1]: Reached target cryptsetup.target. Dec 13 14:06:00.921779 systemd[1]: Starting lvm2-activation.service... Dec 13 14:06:00.925957 lvm[1366]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:06:00.947722 systemd[1]: Finished lvm2-activation.service. Dec 13 14:06:00.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.952472 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:06:00.957182 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:06:00.957215 systemd[1]: Reached target local-fs.target. Dec 13 14:06:00.961741 systemd[1]: Reached target machines.target. Dec 13 14:06:00.967484 systemd[1]: Starting ldconfig.service... Dec 13 14:06:00.971712 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:06:00.971783 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:00.973145 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:06:00.978589 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:06:00.985252 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:06:00.991346 systemd[1]: Starting systemd-sysext.service... Dec 13 14:06:00.995623 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1369 (bootctl) Dec 13 14:06:00.996996 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:06:02.002276 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Dec 13 14:06:02.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.012111 kernel: kauditd_printk_skb: 42 callbacks suppressed Dec 13 14:06:02.012143 kernel: audit: type=1130 audit(1734098762.007:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.101783 systemd-networkd[1305]: eth0: Gained IPv6LL Dec 13 14:06:02.104590 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:06:02.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.128614 kernel: audit: type=1130 audit(1734098762.108:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:02.172642 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:06:02.177946 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:06:02.178229 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:06:02.540617 kernel: loop0: detected capacity change from 0 to 194512 Dec 13 14:06:03.379261 systemd-fsck[1377]: fsck.fat 4.2 (2021-01-31) Dec 13 14:06:03.379261 systemd-fsck[1377]: /dev/sda1: 236 files, 117175/258078 clusters Dec 13 14:06:03.382054 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:06:03.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.398341 systemd[1]: Mounting boot.mount... Dec 13 14:06:03.409907 kernel: audit: type=1130 audit(1734098763.386:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.496688 systemd[1]: Mounted boot.mount. Dec 13 14:06:03.507245 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:06:03.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:03.528731 kernel: audit: type=1130 audit(1734098763.510:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.108630 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:06:04.117057 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:06:04.117797 systemd[1]: Finished systemd-machine-id-commit.service. 
Dec 13 14:06:04.143713 kernel: audit: type=1130 audit(1734098764.121:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.143832 kernel: loop1: detected capacity change from 0 to 194512 Dec 13 14:06:04.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.151657 (sd-sysext)[1396]: Using extensions 'kubernetes'. Dec 13 14:06:04.152037 (sd-sysext)[1396]: Merged extensions into '/usr'. Dec 13 14:06:04.169851 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:06:04.175061 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:06:04.176681 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:06:04.185786 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:06:04.192104 systemd[1]: Starting modprobe@loop.service... Dec 13 14:06:04.196924 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:06:04.197215 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:04.200436 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:06:04.205644 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:06:04.205940 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:06:04.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.212028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:06:04.212199 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:06:04.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.248362 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:06:04.248713 systemd[1]: Finished modprobe@loop.service. Dec 13 14:06:04.249025 kernel: audit: type=1130 audit(1734098764.209:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.249103 kernel: audit: type=1131 audit(1734098764.209:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.271410 kernel: audit: type=1130 audit(1734098764.227:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:04.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.272496 systemd[1]: Finished systemd-sysext.service. Dec 13 14:06:04.289369 kernel: audit: type=1131 audit(1734098764.227:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.310908 kernel: audit: type=1130 audit(1734098764.256:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.312588 systemd[1]: Starting ensure-sysext.service... Dec 13 14:06:04.317264 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:06:04.317434 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:06:04.318772 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:06:04.327495 systemd[1]: Reloading. Dec 13 14:06:04.332936 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:06:04.374888 /usr/lib/systemd/system-generators/torcx-generator[1430]: time="2024-12-13T14:06:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:06:04.374918 /usr/lib/systemd/system-generators/torcx-generator[1430]: time="2024-12-13T14:06:04Z" level=info msg="torcx already run" Dec 13 14:06:04.471266 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:06:04.471588 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:06:04.488985 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:06:04.556188 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:06:04.561736 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Dec 13 14:06:04.563247 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:06:04.569811 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:06:04.575789 systemd[1]: Starting modprobe@loop.service... Dec 13 14:06:04.579979 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:06:04.580239 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:04.581170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:06:04.581443 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:06:04.582310 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:06:04.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.587101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:06:04.587340 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:06:04.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.593321 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:06:04.593651 systemd[1]: Finished modprobe@loop.service. Dec 13 14:06:04.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.600314 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:06:04.601923 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:06:04.607786 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:06:04.613859 systemd[1]: Starting modprobe@loop.service... Dec 13 14:06:04.618413 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:06:04.618757 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:04.619726 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:06:04.620033 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 14:06:04.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.625967 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:06:04.626227 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:06:04.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.631970 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:06:04.632289 systemd[1]: Finished modprobe@loop.service. Dec 13 14:06:04.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.639562 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:06:04.641021 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:06:04.646758 systemd[1]: Starting modprobe@drm.service... Dec 13 14:06:04.652726 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:06:04.658873 systemd[1]: Starting modprobe@loop.service... Dec 13 14:06:04.663469 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:06:04.663849 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:04.664966 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:06:04.665249 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:06:04.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.671251 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:06:04.671501 systemd[1]: Finished modprobe@drm.service. Dec 13 14:06:04.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:04.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.676734 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:06:04.676989 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:06:04.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.682408 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:06:04.682744 systemd[1]: Finished modprobe@loop.service. Dec 13 14:06:04.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.688006 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:06:04.688158 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:06:04.689507 systemd[1]: Finished ensure-sysext.service. Dec 13 14:06:04.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.793552 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:06:04.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.800151 systemd[1]: Starting audit-rules.service... Dec 13 14:06:04.805255 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:06:04.810909 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:06:04.817466 systemd[1]: Starting systemd-resolved.service... Dec 13 14:06:04.823502 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:06:04.829047 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:06:04.834828 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:06:04.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.840129 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 13 14:06:04.849000 audit[1530]: SYSTEM_BOOT pid=1530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.852799 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:06:04.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.899526 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:06:04.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.922439 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:06:04.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.927439 systemd[1]: Reached target time-set.target. Dec 13 14:06:04.939461 systemd-resolved[1526]: Positive Trust Anchors: Dec 13 14:06:04.939474 systemd-resolved[1526]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:06:04.939500 systemd-resolved[1526]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:06:04.938000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:06:04.938000 audit[1544]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcc14ff00 a2=420 a3=0 items=0 ppid=1521 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:06:04.938000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:06:04.940400 augenrules[1544]: No rules Dec 13 14:06:04.941283 systemd[1]: Finished audit-rules.service. Dec 13 14:06:04.948565 systemd-resolved[1526]: Using system hostname 'ci-3510.3.6-a-c740448bc5'. Dec 13 14:06:04.950565 systemd[1]: Started systemd-resolved.service. Dec 13 14:06:04.955626 systemd[1]: Reached target network.target. Dec 13 14:06:04.960524 systemd[1]: Reached target network-online.target. Dec 13 14:06:04.965518 systemd[1]: Reached target nss-lookup.target. Dec 13 14:06:05.140342 systemd-timesyncd[1527]: Contacted time server 173.208.172.164:123 (1.flatcar.pool.ntp.org). Dec 13 14:06:05.140421 systemd-timesyncd[1527]: Initial clock synchronization to Fri 2024-12-13 14:06:05.139330 UTC. Dec 13 14:06:14.347274 ldconfig[1368]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:06:14.356564 systemd[1]: Finished ldconfig.service. 
Dec 13 14:06:14.363063 systemd[1]: Starting systemd-update-done.service... Dec 13 14:06:14.434874 systemd[1]: Finished systemd-update-done.service. Dec 13 14:06:14.440727 systemd[1]: Reached target sysinit.target. Dec 13 14:06:14.446034 systemd[1]: Started motdgen.path. Dec 13 14:06:14.450380 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:06:14.457815 systemd[1]: Started logrotate.timer. Dec 13 14:06:14.462502 systemd[1]: Started mdadm.timer. Dec 13 14:06:14.466806 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:06:14.472208 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:06:14.472242 systemd[1]: Reached target paths.target. Dec 13 14:06:14.477006 systemd[1]: Reached target timers.target. Dec 13 14:06:14.484704 systemd[1]: Listening on dbus.socket. Dec 13 14:06:14.491023 systemd[1]: Starting docker.socket... Dec 13 14:06:14.496959 systemd[1]: Listening on sshd.socket. Dec 13 14:06:14.501689 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:14.502219 systemd[1]: Listening on docker.socket. Dec 13 14:06:14.506963 systemd[1]: Reached target sockets.target. Dec 13 14:06:14.511808 systemd[1]: Reached target basic.target. Dec 13 14:06:14.516754 systemd[1]: System is tainted: cgroupsv1 Dec 13 14:06:14.516845 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:06:14.516873 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:06:14.518337 systemd[1]: Starting containerd.service... Dec 13 14:06:14.524020 systemd[1]: Starting dbus.service... Dec 13 14:06:14.529582 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:06:14.535947 systemd[1]: Starting extend-filesystems.service... Dec 13 14:06:14.540637 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:06:14.542433 systemd[1]: Starting kubelet.service... Dec 13 14:06:14.548544 systemd[1]: Starting motdgen.service... Dec 13 14:06:14.554264 systemd[1]: Started nvidia.service. Dec 13 14:06:14.560780 systemd[1]: Starting prepare-helm.service... Dec 13 14:06:14.566733 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:06:14.574276 systemd[1]: Starting sshd-keygen.service... Dec 13 14:06:14.581984 systemd[1]: Starting systemd-logind.service... Dec 13 14:06:14.591858 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:14.591970 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:06:14.593755 systemd[1]: Starting update-engine.service... Dec 13 14:06:14.599352 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:06:14.610013 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:06:14.610368 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Dec 13 14:06:14.651973 jq[1580]: true Dec 13 14:06:14.653283 jq[1559]: false Dec 13 14:06:14.661556 extend-filesystems[1560]: Found loop1 Dec 13 14:06:14.666202 extend-filesystems[1560]: Found sda Dec 13 14:06:14.666202 extend-filesystems[1560]: Found sda1 Dec 13 14:06:14.666202 extend-filesystems[1560]: Found sda2 Dec 13 14:06:14.666202 extend-filesystems[1560]: Found sda3 Dec 13 14:06:14.666202 extend-filesystems[1560]: Found usr Dec 13 14:06:14.666202 extend-filesystems[1560]: Found sda4 Dec 13 14:06:14.666202 extend-filesystems[1560]: Found sda6 Dec 13 14:06:14.666202 extend-filesystems[1560]: Found sda7 Dec 13 14:06:14.666202 extend-filesystems[1560]: Found sda9 Dec 13 14:06:14.666202 extend-filesystems[1560]: Checking size of /dev/sda9 Dec 13 14:06:14.687802 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:06:14.688059 systemd[1]: Finished motdgen.service. Dec 13 14:06:14.703346 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:06:14.703625 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:06:14.734397 systemd-logind[1575]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 13 14:06:14.736795 systemd-logind[1575]: New seat seat0. Dec 13 14:06:14.759688 jq[1597]: true Dec 13 14:06:14.795621 env[1588]: time="2024-12-13T14:06:14.794304656Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:06:14.841858 env[1588]: time="2024-12-13T14:06:14.841804457Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:06:14.842179 env[1588]: time="2024-12-13T14:06:14.842157445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:14.844014 env[1588]: time="2024-12-13T14:06:14.843967466Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:06:14.844014 env[1588]: time="2024-12-13T14:06:14.844008064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:14.844313 env[1588]: time="2024-12-13T14:06:14.844286415Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:06:14.844313 env[1588]: time="2024-12-13T14:06:14.844311134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:14.844384 env[1588]: time="2024-12-13T14:06:14.844326254Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:06:14.844384 env[1588]: time="2024-12-13T14:06:14.844336374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:14.844428 env[1588]: time="2024-12-13T14:06:14.844407091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:06:14.845011 env[1588]: time="2024-12-13T14:06:14.844642724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:14.845011 env[1588]: time="2024-12-13T14:06:14.844802718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:06:14.845011 env[1588]: time="2024-12-13T14:06:14.844819278Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:06:14.845011 env[1588]: time="2024-12-13T14:06:14.844875916Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:06:14.845011 env[1588]: time="2024-12-13T14:06:14.844887315Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:06:14.857912 env[1588]: time="2024-12-13T14:06:14.857820731Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:06:14.857912 env[1588]: time="2024-12-13T14:06:14.857870849Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:06:14.857912 env[1588]: time="2024-12-13T14:06:14.857886889Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:06:14.858069 env[1588]: time="2024-12-13T14:06:14.857930447Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.858069 env[1588]: time="2024-12-13T14:06:14.857947407Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.858069 env[1588]: time="2024-12-13T14:06:14.857963246Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.858069 env[1588]: time="2024-12-13T14:06:14.857975646Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.858357 env[1588]: time="2024-12-13T14:06:14.858333954Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.858398 env[1588]: time="2024-12-13T14:06:14.858360073Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.858398 env[1588]: time="2024-12-13T14:06:14.858374513Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.858398 env[1588]: time="2024-12-13T14:06:14.858387952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.858461 env[1588]: time="2024-12-13T14:06:14.858401552Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:06:14.858554 env[1588]: time="2024-12-13T14:06:14.858531508Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:06:14.858665 env[1588]: time="2024-12-13T14:06:14.858647504Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Dec 13 14:06:14.858978 env[1588]: time="2024-12-13T14:06:14.858957414Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.859013 env[1588]: time="2024-12-13T14:06:14.858988013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.859013 env[1588]: time="2024-12-13T14:06:14.859002052Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:06:14.859053 env[1588]: time="2024-12-13T14:06:14.859045051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.859073 env[1588]: time="2024-12-13T14:06:14.859058730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.859092 env[1588]: time="2024-12-13T14:06:14.859071650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.859092 env[1588]: time="2024-12-13T14:06:14.859084089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.859135 env[1588]: time="2024-12-13T14:06:14.859096729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.859135 env[1588]: time="2024-12-13T14:06:14.859109369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.859135 env[1588]: time="2024-12-13T14:06:14.859120888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.859135 env[1588]: time="2024-12-13T14:06:14.859131848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.859211 env[1588]: time="2024-12-13T14:06:14.859144687Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:06:14.859521 env[1588]: time="2024-12-13T14:06:14.859352441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.859521 env[1588]: time="2024-12-13T14:06:14.859381920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.859521 env[1588]: time="2024-12-13T14:06:14.859394839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.859521 env[1588]: time="2024-12-13T14:06:14.859407679Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:06:14.859521 env[1588]: time="2024-12-13T14:06:14.859421638Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:06:14.859521 env[1588]: time="2024-12-13T14:06:14.859432598Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:06:14.859521 env[1588]: time="2024-12-13T14:06:14.859449677Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:06:14.859521 env[1588]: time="2024-12-13T14:06:14.859484876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 14:06:14.860661 env[1588]: time="2024-12-13T14:06:14.859766667Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:06:14.860661 env[1588]: time="2024-12-13T14:06:14.859898143Z" level=info msg="Connect containerd service" Dec 13 14:06:14.860661 env[1588]: time="2024-12-13T14:06:14.859947501Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:06:14.860661 env[1588]: time="2024-12-13T14:06:14.860526122Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:06:14.865538 env[1588]: time="2024-12-13T14:06:14.860791513Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:06:14.865566 tar[1583]: linux-arm64/helm Dec 13 14:06:14.868733 extend-filesystems[1560]: Old size kept for /dev/sda9 Dec 13 14:06:14.873810 extend-filesystems[1560]: Found sr0 Dec 13 14:06:14.874033 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:06:14.874327 systemd[1]: Finished extend-filesystems.service. Dec 13 14:06:14.896630 env[1588]: time="2024-12-13T14:06:14.860827592Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 14:06:14.896630 env[1588]: time="2024-12-13T14:06:14.885631618Z" level=info msg="containerd successfully booted in 0.104450s" Dec 13 14:06:14.885965 systemd[1]: Started containerd.service. Dec 13 14:06:14.912129 env[1588]: time="2024-12-13T14:06:14.911556167Z" level=info msg="Start subscribing containerd event" Dec 13 14:06:14.912129 env[1588]: time="2024-12-13T14:06:14.911651924Z" level=info msg="Start recovering state" Dec 13 14:06:14.912129 env[1588]: time="2024-12-13T14:06:14.911737281Z" level=info msg="Start event monitor" Dec 13 14:06:14.912129 env[1588]: time="2024-12-13T14:06:14.911770480Z" level=info msg="Start snapshots syncer" Dec 13 14:06:14.912129 env[1588]: time="2024-12-13T14:06:14.911782279Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:06:14.912129 env[1588]: time="2024-12-13T14:06:14.911789639Z" level=info msg="Start streaming server" Dec 13 14:06:14.975311 bash[1630]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:06:14.976275 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:06:15.000033 dbus-daemon[1558]: [system] SELinux support is enabled Dec 13 14:06:15.000225 systemd[1]: Started dbus.service. Dec 13 14:06:15.006115 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:06:15.006147 systemd[1]: Reached target system-config.target. Dec 13 14:06:15.015039 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:06:15.015067 systemd[1]: Reached target user-config.target. Dec 13 14:06:15.028650 systemd[1]: Started systemd-logind.service. Dec 13 14:06:15.033484 dbus-daemon[1558]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 14:06:15.160264 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:06:15.463333 systemd[1]: Started kubelet.service. Dec 13 14:06:15.522639 tar[1583]: linux-arm64/LICENSE Dec 13 14:06:15.522897 tar[1583]: linux-arm64/README.md Dec 13 14:06:15.531056 systemd[1]: Finished prepare-helm.service. Dec 13 14:06:15.860266 update_engine[1579]: I1213 14:06:15.825037 1579 main.cc:92] Flatcar Update Engine starting Dec 13 14:06:15.949921 kubelet[1674]: E1213 14:06:15.949834 1674 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:06:15.951831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:06:15.951970 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:06:15.955686 systemd[1]: Started update-engine.service. Dec 13 14:06:15.955995 update_engine[1579]: I1213 14:06:15.955749 1579 update_check_scheduler.cc:74] Next update check in 4m47s Dec 13 14:06:15.962407 systemd[1]: Started locksmithd.service. Dec 13 14:06:17.934769 sshd_keygen[1578]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:06:17.953135 systemd[1]: Finished sshd-keygen.service. Dec 13 14:06:17.959631 systemd[1]: Starting issuegen.service... Dec 13 14:06:17.964811 systemd[1]: Started waagent.service. Dec 13 14:06:17.969728 systemd[1]: issuegen.service: Deactivated successfully. 
Dec 13 14:06:17.969991 systemd[1]: Finished issuegen.service. Dec 13 14:06:17.976243 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:06:17.990073 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:06:17.998594 systemd[1]: Started getty@tty1.service. Dec 13 14:06:18.004811 systemd[1]: Started serial-getty@ttyAMA0.service. Dec 13 14:06:18.015094 systemd[1]: Reached target getty.target. Dec 13 14:06:18.019491 systemd[1]: Reached target multi-user.target. Dec 13 14:06:18.025949 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:06:18.046010 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:06:18.046267 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:06:18.052148 systemd[1]: Startup finished in 18.888s (kernel) + 37.669s (userspace) = 56.557s. Dec 13 14:06:18.116212 locksmithd[1686]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:06:18.205099 login[1708]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:06:18.205638 login[1707]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:06:18.221401 systemd[1]: Created slice user-500.slice. Dec 13 14:06:18.222389 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:06:18.224840 systemd-logind[1575]: New session 2 of user core. Dec 13 14:06:18.228546 systemd-logind[1575]: New session 1 of user core. Dec 13 14:06:18.241760 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:06:18.243146 systemd[1]: Starting user@500.service... Dec 13 14:06:18.255483 (systemd)[1714]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:06:18.369416 systemd[1714]: Queued start job for default target default.target. Dec 13 14:06:18.370076 systemd[1714]: Reached target paths.target. Dec 13 14:06:18.370107 systemd[1714]: Reached target sockets.target. Dec 13 14:06:18.370118 systemd[1714]: Reached target timers.target. Dec 13 14:06:18.370128 systemd[1714]: Reached target basic.target. Dec 13 14:06:18.370178 systemd[1714]: Reached target default.target. Dec 13 14:06:18.370199 systemd[1714]: Startup finished in 108ms. Dec 13 14:06:18.370263 systemd[1]: Started user@500.service. Dec 13 14:06:18.371248 systemd[1]: Started session-1.scope. Dec 13 14:06:18.371805 systemd[1]: Started session-2.scope. Dec 13 14:06:23.812545 waagent[1702]: 2024-12-13T14:06:23.812432Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Dec 13 14:06:23.818943 waagent[1702]: 2024-12-13T14:06:23.818862Z INFO Daemon Daemon OS: flatcar 3510.3.6 Dec 13 14:06:23.823572 waagent[1702]: 2024-12-13T14:06:23.823507Z INFO Daemon Daemon Python: 3.9.16 Dec 13 14:06:23.828039 waagent[1702]: 2024-12-13T14:06:23.827863Z INFO Daemon Daemon Run daemon Dec 13 14:06:23.832246 waagent[1702]: 2024-12-13T14:06:23.832184Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6' Dec 13 14:06:23.849126 waagent[1702]: 2024-12-13T14:06:23.848972Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Dec 13 14:06:23.865177 waagent[1702]: 2024-12-13T14:06:23.865017Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:06:23.875066 waagent[1702]: 2024-12-13T14:06:23.874979Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:06:23.880220 waagent[1702]: 2024-12-13T14:06:23.880139Z INFO Daemon Daemon Using waagent for provisioning Dec 13 14:06:23.886185 waagent[1702]: 2024-12-13T14:06:23.886108Z INFO Daemon Daemon Activate resource disk Dec 13 14:06:23.890962 waagent[1702]: 2024-12-13T14:06:23.890888Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 14:06:23.904968 waagent[1702]: 2024-12-13T14:06:23.904879Z INFO Daemon Daemon Found device: None Dec 13 14:06:23.909458 waagent[1702]: 2024-12-13T14:06:23.909382Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 14:06:23.917757 waagent[1702]: 2024-12-13T14:06:23.917675Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 14:06:23.929233 waagent[1702]: 2024-12-13T14:06:23.929162Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:06:23.935000 waagent[1702]: 2024-12-13T14:06:23.934922Z INFO Daemon Daemon Running default provisioning handler Dec 13 14:06:23.949673 waagent[1702]: 2024-12-13T14:06:23.949482Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Dec 13 14:06:23.966091 waagent[1702]: 2024-12-13T14:06:23.965918Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:06:23.976242 waagent[1702]: 2024-12-13T14:06:23.976156Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:06:23.981533 waagent[1702]: 2024-12-13T14:06:23.981455Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 14:06:24.094511 waagent[1702]: 2024-12-13T14:06:24.090559Z INFO Daemon Daemon Successfully mounted dvd Dec 13 14:06:24.141962 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 14:06:24.215259 waagent[1702]: 2024-12-13T14:06:24.215117Z INFO Daemon Daemon Detect protocol endpoint Dec 13 14:06:24.220238 waagent[1702]: 2024-12-13T14:06:24.220141Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:06:24.226379 waagent[1702]: 2024-12-13T14:06:24.226288Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 14:06:24.233254 waagent[1702]: 2024-12-13T14:06:24.233160Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 14:06:24.238790 waagent[1702]: 2024-12-13T14:06:24.238706Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 14:06:24.244069 waagent[1702]: 2024-12-13T14:06:24.243978Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 14:06:24.479593 waagent[1702]: 2024-12-13T14:06:24.479523Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 14:06:24.487170 waagent[1702]: 2024-12-13T14:06:24.487118Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 14:06:24.492752 waagent[1702]: 2024-12-13T14:06:24.492661Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 14:06:24.990297 waagent[1702]: 2024-12-13T14:06:24.990145Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 14:06:25.005373 waagent[1702]: 2024-12-13T14:06:25.005294Z INFO Daemon Daemon Forcing an update of the goal state.. Dec 13 14:06:25.011059 waagent[1702]: 2024-12-13T14:06:25.010975Z INFO Daemon Daemon Fetching goal state [incarnation 1] Dec 13 14:06:25.127813 waagent[1702]: 2024-12-13T14:06:25.127679Z INFO Daemon Daemon Found private key matching thumbprint D50069850D28EB9CCA7E4D4C35A58ACE7923BE8E Dec 13 14:06:25.136068 waagent[1702]: 2024-12-13T14:06:25.135980Z INFO Daemon Daemon Certificate with thumbprint E31F056E86498D8242F1445A24628E02F5F0117A has no matching private key. Dec 13 14:06:25.146005 waagent[1702]: 2024-12-13T14:06:25.145910Z INFO Daemon Daemon Fetch goal state completed Dec 13 14:06:25.194475 waagent[1702]: 2024-12-13T14:06:25.194417Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 5b8a3be5-65c8-409f-b626-c317076281e1 New eTag: 1374945515280818203] Dec 13 14:06:25.205301 waagent[1702]: 2024-12-13T14:06:25.205209Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:06:25.226107 waagent[1702]: 2024-12-13T14:06:25.226023Z INFO Daemon Daemon Starting provisioning Dec 13 14:06:25.231209 waagent[1702]: 2024-12-13T14:06:25.231109Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 14:06:25.236011 waagent[1702]: 2024-12-13T14:06:25.235925Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-c740448bc5] Dec 13 14:06:25.382726 waagent[1702]: 2024-12-13T14:06:25.382568Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-c740448bc5] Dec 13 14:06:25.389510 waagent[1702]: 2024-12-13T14:06:25.389411Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 14:06:25.395985 waagent[1702]: 2024-12-13T14:06:25.395898Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 14:06:25.412869 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Dec 13 14:06:25.413090 systemd[1]: Stopped systemd-networkd-wait-online.service. Dec 13 14:06:25.413148 systemd[1]: Stopping systemd-networkd-wait-online.service... Dec 13 14:06:25.413337 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:06:25.422651 systemd-networkd[1305]: eth0: DHCPv6 lease lost Dec 13 14:06:25.423970 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:06:25.424219 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:06:25.426174 systemd[1]: Starting systemd-networkd.service... 
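The "Test for route to 168.63.129.16" step checks for a route to the Azure WireServer by examining /proc/net/route, whose Destination column stores each IPv4 address as little-endian hex (168.63.129.16 becomes 10813FA8, which is visible in the routing-table dumps later in this log). A minimal sketch of that check, assuming only the /proc/net/route text format:

    # Sketch: does /proc/net/route contain a route whose destination is the WireServer?
    import socket
    import struct

    WIRESERVER = "168.63.129.16"

    def proc_hex(addr: str) -> str:
        # /proc/net/route prints the 32-bit address little-endian: 168.63.129.16 -> "10813FA8"
        return "%08X" % struct.unpack("<I", socket.inet_aton(addr))[0]

    def has_route_to(addr: str = WIRESERVER) -> bool:
        want = proc_hex(addr)
        with open("/proc/net/route") as f:
            next(f)  # skip header: Iface Destination Gateway Flags ...
            return any(line.split()[1] == want for line in f)

    if __name__ == "__main__":
        print(f"route to {WIRESERVER}:", has_route_to())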
Dec 13 14:06:25.461007 systemd-networkd[1760]: enP45858s1: Link UP Dec 13 14:06:25.461017 systemd-networkd[1760]: enP45858s1: Gained carrier Dec 13 14:06:25.462200 systemd-networkd[1760]: eth0: Link UP Dec 13 14:06:25.462212 systemd-networkd[1760]: eth0: Gained carrier Dec 13 14:06:25.462549 systemd-networkd[1760]: lo: Link UP Dec 13 14:06:25.462559 systemd-networkd[1760]: lo: Gained carrier Dec 13 14:06:25.462809 systemd-networkd[1760]: eth0: Gained IPv6LL Dec 13 14:06:25.464054 systemd-networkd[1760]: Enumeration completed Dec 13 14:06:25.464197 systemd[1]: Started systemd-networkd.service. Dec 13 14:06:25.466131 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:06:25.467750 waagent[1702]: 2024-12-13T14:06:25.467412Z INFO Daemon Daemon Create user account if not exists Dec 13 14:06:25.473772 systemd-networkd[1760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:06:25.473902 waagent[1702]: 2024-12-13T14:06:25.473804Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 14:06:25.480576 waagent[1702]: 2024-12-13T14:06:25.480442Z INFO Daemon Daemon Configure sudoer Dec 13 14:06:25.485585 waagent[1702]: 2024-12-13T14:06:25.485499Z INFO Daemon Daemon Configure sshd Dec 13 14:06:25.490072 waagent[1702]: 2024-12-13T14:06:25.489977Z INFO Daemon Daemon Deploy ssh public key. Dec 13 14:06:25.500532 systemd-networkd[1760]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 14:06:25.505023 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:06:26.198804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:06:26.198974 systemd[1]: Stopped kubelet.service. Dec 13 14:06:26.200468 systemd[1]: Starting kubelet.service... Dec 13 14:06:26.283227 systemd[1]: Started kubelet.service. Dec 13 14:06:26.388808 kubelet[1778]: E1213 14:06:26.388751 1778 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:06:26.391493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:06:26.391650 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:06:26.763538 waagent[1702]: 2024-12-13T14:06:26.757882Z INFO Daemon Daemon Provisioning complete Dec 13 14:06:26.786175 waagent[1702]: 2024-12-13T14:06:26.786109Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 14:06:26.793274 waagent[1702]: 2024-12-13T14:06:26.793186Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 13 14:06:26.803483 waagent[1702]: 2024-12-13T14:06:26.803402Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Dec 13 14:06:27.120500 waagent[1786]: 2024-12-13T14:06:27.120340Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Dec 13 14:06:27.121759 waagent[1786]: 2024-12-13T14:06:27.121693Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:06:27.122028 waagent[1786]: 2024-12-13T14:06:27.121977Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:06:27.139546 waagent[1786]: 2024-12-13T14:06:27.139450Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Dec 13 14:06:27.139922 waagent[1786]: 2024-12-13T14:06:27.139867Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Dec 13 14:06:27.213652 waagent[1786]: 2024-12-13T14:06:27.213510Z INFO ExtHandler ExtHandler Found private key matching thumbprint D50069850D28EB9CCA7E4D4C35A58ACE7923BE8E Dec 13 14:06:27.214023 waagent[1786]: 2024-12-13T14:06:27.213971Z INFO ExtHandler ExtHandler Certificate with thumbprint E31F056E86498D8242F1445A24628E02F5F0117A has no matching private key. Dec 13 14:06:27.214337 waagent[1786]: 2024-12-13T14:06:27.214287Z INFO ExtHandler ExtHandler Fetch goal state completed Dec 13 14:06:27.227908 waagent[1786]: 2024-12-13T14:06:27.227855Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 0f7f753b-cbf4-4819-a068-46467de61286 New eTag: 1374945515280818203] Dec 13 14:06:27.228619 waagent[1786]: 2024-12-13T14:06:27.228553Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:06:27.760972 waagent[1786]: 2024-12-13T14:06:27.760827Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:06:27.772663 waagent[1786]: 2024-12-13T14:06:27.772555Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1786 Dec 13 14:06:27.776933 waagent[1786]: 2024-12-13T14:06:27.776861Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:06:27.778450 waagent[1786]: 2024-12-13T14:06:27.778389Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:06:28.030076 waagent[1786]: 2024-12-13T14:06:28.029965Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:06:28.030712 waagent[1786]: 2024-12-13T14:06:28.030650Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:06:28.038931 waagent[1786]: 2024-12-13T14:06:28.038871Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 14:06:28.039656 waagent[1786]: 2024-12-13T14:06:28.039578Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:06:28.040981 waagent[1786]: 2024-12-13T14:06:28.040918Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Dec 13 14:06:28.042509 waagent[1786]: 2024-12-13T14:06:28.042433Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 14:06:28.042850 waagent[1786]: 2024-12-13T14:06:28.042778Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:06:28.043399 waagent[1786]: 2024-12-13T14:06:28.043327Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:06:28.044039 waagent[1786]: 2024-12-13T14:06:28.043975Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
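The two goal-state certificates are identified by 40-character hex thumbprints (D50069... with a matching private key, E31F05... without one). Forty hex digits is the width of a SHA-1 digest, so a plausible reading is that the thumbprint is the SHA-1 hash of the DER-encoded certificate; that interpretation is an assumption, not something stated in the log. A hedged sketch under that assumption:

    # Sketch (assumption): thumbprint = uppercase hex SHA-1 of the DER-encoded certificate.
    import hashlib

    def thumbprint(der_bytes: bytes) -> str:
        return hashlib.sha1(der_bytes).hexdigest().upper()

    # Usage: thumbprint(open("cert.der", "rb").read()) would then be compared against
    # values like "D50069850D28EB9CCA7E4D4C35A58ACE7923BE8E" from the log.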
Dec 13 14:06:28.044370 waagent[1786]: 2024-12-13T14:06:28.044311Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:06:28.044370 waagent[1786]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:06:28.044370 waagent[1786]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:06:28.044370 waagent[1786]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:06:28.044370 waagent[1786]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:06:28.044370 waagent[1786]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:06:28.044370 waagent[1786]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:06:28.047017 waagent[1786]: 2024-12-13T14:06:28.046833Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 14:06:28.047414 waagent[1786]: 2024-12-13T14:06:28.047332Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:06:28.048255 waagent[1786]: 2024-12-13T14:06:28.048179Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:06:28.048942 waagent[1786]: 2024-12-13T14:06:28.048868Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:06:28.049107 waagent[1786]: 2024-12-13T14:06:28.049061Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:06:28.049231 waagent[1786]: 2024-12-13T14:06:28.049188Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:06:28.050260 waagent[1786]: 2024-12-13T14:06:28.050195Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:06:28.050425 waagent[1786]: 2024-12-13T14:06:28.050354Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 14:06:28.051262 waagent[1786]: 2024-12-13T14:06:28.051166Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:06:28.051457 waagent[1786]: 2024-12-13T14:06:28.051387Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 13 14:06:28.051786 waagent[1786]: 2024-12-13T14:06:28.051718Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:06:28.062510 waagent[1786]: 2024-12-13T14:06:28.062432Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Dec 13 14:06:28.064766 waagent[1786]: 2024-12-13T14:06:28.064685Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:06:28.066130 waagent[1786]: 2024-12-13T14:06:28.066068Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Dec 13 14:06:28.134161 waagent[1786]: 2024-12-13T14:06:28.134071Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1760' Dec 13 14:06:28.138023 waagent[1786]: 2024-12-13T14:06:28.137946Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
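The MonitorHandler routing-table dump prints the /proc/net/route fields in hex. Decoding the Destination, Gateway, and Mask columns back into dotted-quad form recovers addresses already seen elsewhere in this log: gateway 0114C80A is 10.200.20.1 and destination 10813FA8 is the WireServer 168.63.129.16. A small decoder, assuming only the little-endian hex encoding of that file:

    # Decode the little-endian hex IPv4 fields from the /proc/net/route dump above.
    import socket
    import struct

    def decode(hexaddr: str) -> str:
        # "0114C80A" -> 10.200.20.1, "10813FA8" -> 168.63.129.16, "00FFFFFF" -> 255.255.255.0
        return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

    for dest, gw, mask in [("00000000", "0114C80A", "00000000"),
                           ("0014C80A", "00000000", "00FFFFFF"),
                           ("10813FA8", "0114C80A", "FFFFFFFF")]:
        print(decode(dest), "via", decode(gw), "mask", decode(mask))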
Dec 13 14:06:28.320480 waagent[1786]: 2024-12-13T14:06:28.320348Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:06:28.320480 waagent[1786]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:06:28.320480 waagent[1786]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:06:28.320480 waagent[1786]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c2:b4:9d brd ff:ff:ff:ff:ff:ff Dec 13 14:06:28.320480 waagent[1786]: 3: enP45858s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c2:b4:9d brd ff:ff:ff:ff:ff:ff\ altname enP45858p0s2 Dec 13 14:06:28.320480 waagent[1786]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:06:28.320480 waagent[1786]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:06:28.320480 waagent[1786]: 2: eth0 inet 10.200.20.36/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:06:28.320480 waagent[1786]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:06:28.320480 waagent[1786]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 14:06:28.320480 waagent[1786]: 2: eth0 inet6 fe80::20d:3aff:fec2:b49d/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:06:28.548319 waagent[1786]: 2024-12-13T14:06:28.548251Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting Dec 13 14:06:28.807218 waagent[1702]: 2024-12-13T14:06:28.807095Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Dec 13 14:06:28.813603 waagent[1702]: 2024-12-13T14:06:28.813533Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent Dec 13 14:06:30.050254 waagent[1819]: 2024-12-13T14:06:30.050149Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2) Dec 13 14:06:30.051428 waagent[1819]: 2024-12-13T14:06:30.051360Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6 Dec 13 14:06:30.051697 waagent[1819]: 2024-12-13T14:06:30.051647Z INFO ExtHandler ExtHandler Python: 3.9.16 Dec 13 14:06:30.051911 waagent[1819]: 2024-12-13T14:06:30.051865Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Dec 13 14:06:30.060341 waagent[1819]: 2024-12-13T14:06:30.060177Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:06:30.061023 waagent[1819]: 2024-12-13T14:06:30.060963Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:06:30.061283 waagent[1819]: 2024-12-13T14:06:30.061236Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:06:30.075419 waagent[1819]: 2024-12-13T14:06:30.075324Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 14:06:30.093201 waagent[1819]: 2024-12-13T14:06:30.093134Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 14:06:30.094505 waagent[1819]: 2024-12-13T14:06:30.094445Z INFO ExtHandler Dec 13 14:06:30.094793 waagent[1819]: 2024-12-13T14:06:30.094742Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e62506fe-eef5-41b0-9883-0a7edbced03c eTag: 1374945515280818203 source: 
Fabric] Dec 13 14:06:30.095679 waagent[1819]: 2024-12-13T14:06:30.095620Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 13 14:06:30.097061 waagent[1819]: 2024-12-13T14:06:30.097003Z INFO ExtHandler Dec 13 14:06:30.097280 waagent[1819]: 2024-12-13T14:06:30.097233Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 14:06:30.104293 waagent[1819]: 2024-12-13T14:06:30.104235Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 14:06:30.104996 waagent[1819]: 2024-12-13T14:06:30.104952Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:06:30.125134 waagent[1819]: 2024-12-13T14:06:30.125065Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Dec 13 14:06:30.197650 waagent[1819]: 2024-12-13T14:06:30.197482Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E31F056E86498D8242F1445A24628E02F5F0117A', 'hasPrivateKey': False} Dec 13 14:06:30.202217 waagent[1819]: 2024-12-13T14:06:30.202129Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D50069850D28EB9CCA7E4D4C35A58ACE7923BE8E', 'hasPrivateKey': True} Dec 13 14:06:30.203485 waagent[1819]: 2024-12-13T14:06:30.203418Z INFO ExtHandler Fetch goal state completed Dec 13 14:06:30.225889 waagent[1819]: 2024-12-13T14:06:30.225759Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Dec 13 14:06:30.239353 waagent[1819]: 2024-12-13T14:06:30.239236Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1819 Dec 13 14:06:30.243031 waagent[1819]: 2024-12-13T14:06:30.242946Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:06:30.244366 waagent[1819]: 2024-12-13T14:06:30.244305Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 13 14:06:30.244819 waagent[1819]: 2024-12-13T14:06:30.244763Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 13 14:06:30.247204 waagent[1819]: 2024-12-13T14:06:30.247143Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:06:30.252702 waagent[1819]: 2024-12-13T14:06:30.252636Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:06:30.253297 waagent[1819]: 2024-12-13T14:06:30.253239Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:06:30.262479 waagent[1819]: 2024-12-13T14:06:30.262418Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 14:06:30.263249 waagent[1819]: 2024-12-13T14:06:30.263189Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:06:30.270246 waagent[1819]: 2024-12-13T14:06:30.270118Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 13 14:06:30.271769 waagent[1819]: 2024-12-13T14:06:30.271682Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 13 14:06:30.273952 waagent[1819]: 2024-12-13T14:06:30.273872Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 14:06:30.274244 waagent[1819]: 2024-12-13T14:06:30.274173Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:06:30.275072 waagent[1819]: 2024-12-13T14:06:30.274995Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:06:30.275813 waagent[1819]: 2024-12-13T14:06:30.275738Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 14:06:30.276167 waagent[1819]: 2024-12-13T14:06:30.276105Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:06:30.276167 waagent[1819]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:06:30.276167 waagent[1819]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:06:30.276167 waagent[1819]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:06:30.276167 waagent[1819]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:06:30.276167 waagent[1819]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:06:30.276167 waagent[1819]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:06:30.278780 waagent[1819]: 2024-12-13T14:06:30.278650Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 14:06:30.279450 waagent[1819]: 2024-12-13T14:06:30.279372Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:06:30.280661 waagent[1819]: 2024-12-13T14:06:30.279968Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:06:30.283154 waagent[1819]: 2024-12-13T14:06:30.283024Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:06:30.283391 waagent[1819]: 2024-12-13T14:06:30.283311Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:06:30.283777 waagent[1819]: 2024-12-13T14:06:30.283708Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:06:30.284409 waagent[1819]: 2024-12-13T14:06:30.284326Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 14:06:30.285168 waagent[1819]: 2024-12-13T14:06:30.285099Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:06:30.288592 waagent[1819]: 2024-12-13T14:06:30.287505Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:06:30.288945 waagent[1819]: 2024-12-13T14:06:30.288878Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 13 14:06:30.289742 waagent[1819]: 2024-12-13T14:06:30.289663Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:06:30.289742 waagent[1819]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:06:30.289742 waagent[1819]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:06:30.289742 waagent[1819]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c2:b4:9d brd ff:ff:ff:ff:ff:ff Dec 13 14:06:30.289742 waagent[1819]: 3: enP45858s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c2:b4:9d brd ff:ff:ff:ff:ff:ff\ altname enP45858p0s2 Dec 13 14:06:30.289742 waagent[1819]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:06:30.289742 waagent[1819]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:06:30.289742 waagent[1819]: 2: eth0 inet 10.200.20.36/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:06:30.289742 waagent[1819]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:06:30.289742 waagent[1819]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 14:06:30.289742 waagent[1819]: 2: eth0 inet6 fe80::20d:3aff:fec2:b49d/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:06:30.300910 waagent[1819]: 2024-12-13T14:06:30.300768Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:06:30.311039 waagent[1819]: 2024-12-13T14:06:30.310955Z INFO ExtHandler ExtHandler Downloading agent manifest Dec 13 14:06:30.340096 waagent[1819]: 2024-12-13T14:06:30.339994Z INFO ExtHandler ExtHandler Dec 13 14:06:30.344934 waagent[1819]: 2024-12-13T14:06:30.344840Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 61648127-7673-4df2-b731-06785de7af59 correlation 5fb01de9-e48c-42d1-ba77-d01f1500b63a created: 2024-12-13T14:04:08.462234Z] Dec 13 14:06:30.354922 waagent[1819]: 2024-12-13T14:06:30.354840Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 14:06:30.364865 waagent[1819]: 2024-12-13T14:06:30.364757Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 24 ms] Dec 13 14:06:30.394096 waagent[1819]: 2024-12-13T14:06:30.394026Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
Dec 13 14:06:30.419452 waagent[1819]: 2024-12-13T14:06:30.419359Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D4DC03B7-A8BB-447E-B8A4-3CB81E3CBE10;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Dec 13 14:06:30.471452 waagent[1819]: 2024-12-13T14:06:30.471326Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 13 14:06:30.471452 waagent[1819]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:06:30.471452 waagent[1819]: pkts bytes target prot opt in out source destination Dec 13 14:06:30.471452 waagent[1819]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:06:30.471452 waagent[1819]: pkts bytes target prot opt in out source destination Dec 13 14:06:30.471452 waagent[1819]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:06:30.471452 waagent[1819]: pkts bytes target prot opt in out source destination Dec 13 14:06:30.471452 waagent[1819]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 14:06:30.471452 waagent[1819]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:06:30.471452 waagent[1819]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:06:30.480771 waagent[1819]: 2024-12-13T14:06:30.480652Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 14:06:30.480771 waagent[1819]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:06:30.480771 waagent[1819]: pkts bytes target prot opt in out source destination Dec 13 14:06:30.480771 waagent[1819]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:06:30.480771 waagent[1819]: pkts bytes target prot opt in out source destination Dec 13 14:06:30.480771 waagent[1819]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:06:30.480771 waagent[1819]: pkts bytes target prot opt in out source destination Dec 13 14:06:30.480771 waagent[1819]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 14:06:30.480771 waagent[1819]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:06:30.480771 waagent[1819]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:06:30.481670 waagent[1819]: 2024-12-13T14:06:30.481621Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 14:06:36.448765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:06:36.448941 systemd[1]: Stopped kubelet.service. Dec 13 14:06:36.450489 systemd[1]: Starting kubelet.service... Dec 13 14:06:36.529773 systemd[1]: Started kubelet.service. Dec 13 14:06:36.576480 kubelet[1877]: E1213 14:06:36.576423 1877 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:06:36.578591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:06:36.578748 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:06:46.698811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:06:46.698989 systemd[1]: Stopped kubelet.service. Dec 13 14:06:46.700427 systemd[1]: Starting kubelet.service... Dec 13 14:06:46.779341 systemd[1]: Started kubelet.service. 
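The EnvHandler dump above lists the three OUTPUT-chain rules the agent keeps for the Azure fabric endpoint 168.63.129.16: accept TCP to port 53, accept TCP from processes owned by UID 0, and drop other new or invalid TCP connections to that address. A sketch of equivalent iptables invocations, expressed as argv lists and not executed; these are reconstructed from the counters dump, not copied from the agent's own code.

    # Equivalent commands for the rules shown in the dump (reconstruction, not the agent's code).
    RULES = [
        ["iptables", "-w", "-A", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp",
         "--dport", "53", "-j", "ACCEPT"],
        ["iptables", "-w", "-A", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["iptables", "-w", "-A", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    # e.g. for cmd in RULES: subprocess.run(cmd, check=True)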
Dec 13 14:06:46.867487 kubelet[1893]: E1213 14:06:46.867426 1893 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:06:46.869449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:06:46.869593 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:06:48.030086 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 13 14:06:56.948832 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 14:06:56.949005 systemd[1]: Stopped kubelet.service. Dec 13 14:06:56.950547 systemd[1]: Starting kubelet.service... Dec 13 14:06:57.028846 systemd[1]: Started kubelet.service. Dec 13 14:06:57.090909 kubelet[1907]: E1213 14:06:57.090842 1907 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:06:57.093018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:06:57.093162 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:01.509693 update_engine[1579]: I1213 14:07:01.509649 1579 update_attempter.cc:509] Updating boot flags... Dec 13 14:07:07.198806 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 14:07:07.198984 systemd[1]: Stopped kubelet.service. Dec 13 14:07:07.200478 systemd[1]: Starting kubelet.service... Dec 13 14:07:07.386086 systemd[1]: Started kubelet.service. Dec 13 14:07:07.438494 kubelet[1962]: E1213 14:07:07.438425 1962 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:07.440545 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:07.440705 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:15.774392 systemd[1]: Created slice system-sshd.slice. Dec 13 14:07:15.775581 systemd[1]: Started sshd@0-10.200.20.36:22-10.200.16.10:41170.service. Dec 13 14:07:16.231539 sshd[1970]: Accepted publickey for core from 10.200.16.10 port 41170 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:16.236819 sshd[1970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:16.241330 systemd[1]: Started session-3.scope. Dec 13 14:07:16.241692 systemd-logind[1575]: New session 3 of user core. Dec 13 14:07:16.596353 systemd[1]: Started sshd@1-10.200.20.36:22-10.200.16.10:41186.service. Dec 13 14:07:17.009817 sshd[1975]: Accepted publickey for core from 10.200.16.10 port 41186 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:17.011440 sshd[1975]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:17.015584 systemd[1]: Started session-4.scope. Dec 13 14:07:17.016264 systemd-logind[1575]: New session 4 of user core. 
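Each "Accepted publickey" record identifies the client key by an OpenSSH-style fingerprint: "SHA256:" followed by the unpadded base64 of the SHA-256 digest of the raw public-key blob. A small sketch of that computation; the key file name in the usage comment is a placeholder, not taken from this log.

    # Compute an OpenSSH SHA256 fingerprint like the one in the sshd records above.
    import base64
    import hashlib

    def ssh_fingerprint(pubkey_line: str) -> str:
        # pubkey_line is an authorized_keys-style line: "ssh-rsa AAAAB3... comment"
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # ssh_fingerprint(open("id_rsa.pub").read())  # -> a "SHA256:xuCpWY..."-style string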
Dec 13 14:07:17.314622 sshd[1975]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:17.317225 systemd[1]: sshd@1-10.200.20.36:22-10.200.16.10:41186.service: Deactivated successfully. Dec 13 14:07:17.318013 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:07:17.319140 systemd-logind[1575]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:07:17.319876 systemd-logind[1575]: Removed session 4. Dec 13 14:07:17.381437 systemd[1]: Started sshd@2-10.200.20.36:22-10.200.16.10:41190.service. Dec 13 14:07:17.448972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 14:07:17.449183 systemd[1]: Stopped kubelet.service. Dec 13 14:07:17.451132 systemd[1]: Starting kubelet.service... Dec 13 14:07:17.736065 systemd[1]: Started kubelet.service. Dec 13 14:07:17.782362 kubelet[1992]: E1213 14:07:17.782316 1992 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:17.784275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:17.784414 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:17.793920 sshd[1982]: Accepted publickey for core from 10.200.16.10 port 41190 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:17.795187 sshd[1982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:17.799433 systemd[1]: Started session-5.scope. Dec 13 14:07:17.799646 systemd-logind[1575]: New session 5 of user core. Dec 13 14:07:18.093643 sshd[1982]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:18.097346 systemd[1]: sshd@2-10.200.20.36:22-10.200.16.10:41190.service: Deactivated successfully. Dec 13 14:07:18.098573 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:07:18.099153 systemd-logind[1575]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:07:18.099974 systemd-logind[1575]: Removed session 5. Dec 13 14:07:18.162758 systemd[1]: Started sshd@3-10.200.20.36:22-10.200.16.10:41206.service. Dec 13 14:07:18.590359 sshd[2004]: Accepted publickey for core from 10.200.16.10 port 41206 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:18.592151 sshd[2004]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:18.597399 systemd[1]: Started session-6.scope. Dec 13 14:07:18.598478 systemd-logind[1575]: New session 6 of user core. Dec 13 14:07:18.915029 sshd[2004]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:18.917748 systemd-logind[1575]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:07:18.918920 systemd[1]: sshd@3-10.200.20.36:22-10.200.16.10:41206.service: Deactivated successfully. Dec 13 14:07:18.919680 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:07:18.920447 systemd-logind[1575]: Removed session 6. Dec 13 14:07:18.981560 systemd[1]: Started sshd@4-10.200.20.36:22-10.200.16.10:51008.service. 
Dec 13 14:07:19.390512 sshd[2011]: Accepted publickey for core from 10.200.16.10 port 51008 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:19.392142 sshd[2011]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:19.396400 systemd[1]: Started session-7.scope. Dec 13 14:07:19.397054 systemd-logind[1575]: New session 7 of user core. Dec 13 14:07:19.684235 sudo[2015]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 14:07:19.684463 sudo[2015]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:07:19.703818 dbus-daemon[1558]: avc: received setenforce notice (enforcing=1) Dec 13 14:07:19.705560 sudo[2015]: pam_unix(sudo:session): session closed for user root Dec 13 14:07:19.785530 sshd[2011]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:19.788776 systemd-logind[1575]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:07:19.789665 systemd[1]: sshd@4-10.200.20.36:22-10.200.16.10:51008.service: Deactivated successfully. Dec 13 14:07:19.790447 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:07:19.790935 systemd-logind[1575]: Removed session 7. Dec 13 14:07:19.854417 systemd[1]: Started sshd@5-10.200.20.36:22-10.200.16.10:51016.service. Dec 13 14:07:20.284108 sshd[2019]: Accepted publickey for core from 10.200.16.10 port 51016 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:20.285443 sshd[2019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:20.289422 systemd-logind[1575]: New session 8 of user core. Dec 13 14:07:20.289857 systemd[1]: Started session-8.scope. Dec 13 14:07:20.528394 sudo[2024]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 14:07:20.528959 sudo[2024]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:07:20.531899 sudo[2024]: pam_unix(sudo:session): session closed for user root Dec 13 14:07:20.536706 sudo[2023]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 14:07:20.537156 sudo[2023]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:07:20.546037 systemd[1]: Stopping audit-rules.service... Dec 13 14:07:20.551317 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 13 14:07:20.551464 kernel: audit: type=1305 audit(1734098840.546:165): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 14:07:20.546000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 14:07:20.551798 auditctl[2027]: No rules Dec 13 14:07:20.552316 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 14:07:20.552575 systemd[1]: Stopped audit-rules.service. Dec 13 14:07:20.554468 systemd[1]: Starting audit-rules.service... 
Dec 13 14:07:20.546000 audit[2027]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffddd5eab0 a2=420 a3=0 items=0 ppid=1 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:20.585884 kernel: audit: type=1300 audit(1734098840.546:165): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffddd5eab0 a2=420 a3=0 items=0 ppid=1 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:20.546000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Dec 13 14:07:20.592732 kernel: audit: type=1327 audit(1734098840.546:165): proctitle=2F7362696E2F617564697463746C002D44 Dec 13 14:07:20.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:20.608351 kernel: audit: type=1131 audit(1734098840.551:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:20.616043 augenrules[2045]: No rules Dec 13 14:07:20.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:20.618435 sudo[2023]: pam_unix(sudo:session): session closed for user root Dec 13 14:07:20.617073 systemd[1]: Finished audit-rules.service. Dec 13 14:07:20.617000 audit[2023]: USER_END pid=2023 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:07:20.650412 kernel: audit: type=1130 audit(1734098840.616:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:20.650486 kernel: audit: type=1106 audit(1734098840.617:168): pid=2023 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:07:20.650510 kernel: audit: type=1104 audit(1734098840.617:169): pid=2023 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:07:20.617000 audit[2023]: CRED_DISP pid=2023 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 14:07:20.699671 sshd[2019]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:20.700000 audit[2019]: USER_END pid=2019 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:07:20.723540 systemd[1]: sshd@5-10.200.20.36:22-10.200.16.10:51016.service: Deactivated successfully. Dec 13 14:07:20.700000 audit[2019]: CRED_DISP pid=2019 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:07:20.741643 kernel: audit: type=1106 audit(1734098840.700:170): pid=2019 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:07:20.741781 kernel: audit: type=1104 audit(1734098840.700:171): pid=2019 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:07:20.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.36:22-10.200.16.10:51016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:20.741900 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:07:20.742107 systemd-logind[1575]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:07:20.759047 kernel: audit: type=1131 audit(1734098840.723:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.36:22-10.200.16.10:51016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:20.759576 systemd-logind[1575]: Removed session 8. Dec 13 14:07:20.765806 systemd[1]: Started sshd@6-10.200.20.36:22-10.200.16.10:51032.service. Dec 13 14:07:20.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.36:22-10.200.16.10:51032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:07:21.182000 audit[2052]: USER_ACCT pid=2052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:07:21.183574 sshd[2052]: Accepted publickey for core from 10.200.16.10 port 51032 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:21.183000 audit[2052]: CRED_ACQ pid=2052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:07:21.184000 audit[2052]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcf017980 a2=3 a3=1 items=0 ppid=1 pid=2052 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.184000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:07:21.185251 sshd[2052]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:21.189252 systemd-logind[1575]: New session 9 of user core. Dec 13 14:07:21.189687 systemd[1]: Started session-9.scope. Dec 13 14:07:21.193000 audit[2052]: USER_START pid=2052 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:07:21.195000 audit[2055]: CRED_ACQ pid=2055 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:07:21.417000 audit[2056]: USER_ACCT pid=2056 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:07:21.418180 sudo[2056]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:07:21.417000 audit[2056]: CRED_REFR pid=2056 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:07:21.418396 sudo[2056]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:07:21.419000 audit[2056]: USER_START pid=2056 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:07:21.439737 systemd[1]: Starting docker.service... 
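The audit records in this part of the log encode the audited command line in the PROCTITLE field as hex with NUL separators between arguments: 737368643A20636F7265205B707269765D above decodes to "sshd: core [priv]", 2F7362696E2F617564697463746C002D44 earlier decodes to "/sbin/auditctl -D", and the iptables records that follow decode the same way. A one-function decoder:

    # Decode an audit PROCTITLE hex string back into the original command line.
    def decode_proctitle(hexstr: str) -> str:
        return bytes.fromhex(hexstr).decode(errors="replace").replace("\x00", " ")

    print(decode_proctitle("737368643A20636F7265205B707269765D"))   # sshd: core [priv]
    print(decode_proctitle("2F7362696E2F617564697463746C002D44"))   # /sbin/auditctl -D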
Dec 13 14:07:21.475709 env[2066]: time="2024-12-13T14:07:21.475652108Z" level=info msg="Starting up" Dec 13 14:07:21.479048 env[2066]: time="2024-12-13T14:07:21.479018153Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:07:21.479169 env[2066]: time="2024-12-13T14:07:21.479155215Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:07:21.479238 env[2066]: time="2024-12-13T14:07:21.479222607Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:07:21.479298 env[2066]: time="2024-12-13T14:07:21.479286238Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:07:21.480794 env[2066]: time="2024-12-13T14:07:21.480771007Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:07:21.480892 env[2066]: time="2024-12-13T14:07:21.480878193Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:07:21.480949 env[2066]: time="2024-12-13T14:07:21.480935825Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:07:21.481000 env[2066]: time="2024-12-13T14:07:21.480989698Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:07:21.570253 env[2066]: time="2024-12-13T14:07:21.570216175Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 14:07:21.570436 env[2066]: time="2024-12-13T14:07:21.570423228Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 14:07:21.570655 env[2066]: time="2024-12-13T14:07:21.570640720Z" level=info msg="Loading containers: start." 
Dec 13 14:07:21.604000 audit[2094]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2094 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.604000 audit[2094]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff4d37730 a2=0 a3=1 items=0 ppid=2066 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.604000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 14:07:21.606000 audit[2096]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.606000 audit[2096]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe1858fa0 a2=0 a3=1 items=0 ppid=2066 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.606000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 14:07:21.608000 audit[2098]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.608000 audit[2098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe73dee20 a2=0 a3=1 items=0 ppid=2066 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.608000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 14:07:21.610000 audit[2100]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.610000 audit[2100]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc721a6c0 a2=0 a3=1 items=0 ppid=2066 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.610000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 14:07:21.611000 audit[2102]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2102 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.611000 audit[2102]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd3e8c530 a2=0 a3=1 items=0 ppid=2066 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.611000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Dec 13 14:07:21.613000 audit[2104]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=2104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.613000 audit[2104]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc9e9edb0 a2=0 a3=1 items=0 ppid=2066 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.613000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Dec 13 14:07:21.626000 audit[2106]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.626000 audit[2106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee62b8c0 a2=0 a3=1 items=0 ppid=2066 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.626000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 14:07:21.628000 audit[2108]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.628000 audit[2108]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe7dd0270 a2=0 a3=1 items=0 ppid=2066 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.628000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 14:07:21.630000 audit[2110]: NETFILTER_CFG table=filter:13 family=2 entries=2 op=nft_register_chain pid=2110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.630000 audit[2110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=fffff762e520 a2=0 a3=1 items=0 ppid=2066 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.630000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:07:21.644000 audit[2114]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_unregister_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.644000 audit[2114]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffff0990a0 a2=0 a3=1 items=0 ppid=2066 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.644000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:07:21.654000 audit[2115]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.654000 audit[2115]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffffbf0730 a2=0 a3=1 items=0 ppid=2066 pid=2115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.654000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:07:21.671624 kernel: Initializing XFRM netlink socket Dec 13 14:07:21.684715 env[2066]: time="2024-12-13T14:07:21.684681472Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:07:21.716000 audit[2123]: NETFILTER_CFG table=nat:16 family=2 entries=2 op=nft_register_chain pid=2123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.716000 audit[2123]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffdab55150 a2=0 a3=1 items=0 ppid=2066 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.716000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 14:07:21.725000 audit[2126]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_rule pid=2126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.725000 audit[2126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff1f916e0 a2=0 a3=1 items=0 ppid=2066 pid=2126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.725000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 14:07:21.728000 audit[2129]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.728000 audit[2129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe68c6c40 a2=0 a3=1 items=0 ppid=2066 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.728000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Dec 13 14:07:21.730000 audit[2131]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.730000 audit[2131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff4fde090 a2=0 a3=1 items=0 ppid=2066 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.730000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Dec 13 14:07:21.732000 audit[2133]: NETFILTER_CFG table=nat:20 family=2 entries=2 op=nft_register_chain pid=2133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.732000 audit[2133]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffc09cfa30 a2=0 a3=1 items=0 ppid=2066 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.732000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 14:07:21.734000 audit[2135]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=2135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.734000 audit[2135]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffeb0690e0 a2=0 a3=1 items=0 ppid=2066 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.734000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 14:07:21.736000 audit[2137]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.736000 audit[2137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffe83472d0 a2=0 a3=1 items=0 ppid=2066 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.736000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Dec 13 14:07:21.738000 audit[2139]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.738000 audit[2139]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffd9f3c600 a2=0 a3=1 items=0 ppid=2066 pid=2139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.738000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 14:07:21.740000 audit[2141]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.740000 audit[2141]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=fffff28a7970 a2=0 a3=1 items=0 ppid=2066 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.740000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 14:07:21.742000 audit[2143]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2143 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.742000 audit[2143]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffdcc84f10 a2=0 a3=1 items=0 ppid=2066 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.742000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 14:07:21.744000 audit[2145]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.744000 audit[2145]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe36c76f0 a2=0 a3=1 items=0 ppid=2066 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.744000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 14:07:21.745259 systemd-networkd[1760]: docker0: Link UP Dec 13 14:07:21.759000 audit[2149]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_unregister_rule pid=2149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.759000 audit[2149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd404d7d0 a2=0 a3=1 items=0 ppid=2066 pid=2149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.759000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:07:21.770000 audit[2150]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=2150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:21.770000 audit[2150]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe0566e60 a2=0 a3=1 items=0 ppid=2066 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:21.770000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:07:21.771203 env[2066]: time="2024-12-13T14:07:21.771175941Z" level=info msg="Loading containers: done." Dec 13 14:07:21.782190 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2935747991-merged.mount: Deactivated successfully. 
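The PROCTITLE fields in the audit records above are hex-encoded, NUL-separated argv strings, so each one can be decoded back into the exact xtables command the Docker daemon issued while building the DOCKER, DOCKER-USER and DOCKER-ISOLATION chains. A minimal decoding sketch in Python; the sample value is copied verbatim from the pid=2094 record above, everything else is illustrative:

    # Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
    # Sample value copied from the pid=2094 NETFILTER_CFG record above.
    proctitle = ("2F7573722F7362696E2F69707461626C6573002D2D77616974"
                 "002D74006E6174002D4E00444F434B4552")
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # -> /usr/sbin/iptables --wait -t nat -N DOCKER

The same decoding applied to the later records recovers the rest of the bridge setup visible above (the MASQUERADE rule for 172.17.0.0/16, the FORWARD jumps into DOCKER-USER, and so on), which helps when auditing exactly what dockerd changed in the ruleset.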
Dec 13 14:07:21.804780 env[2066]: time="2024-12-13T14:07:21.804739527Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:07:21.805113 env[2066]: time="2024-12-13T14:07:21.805096721Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:07:21.805274 env[2066]: time="2024-12-13T14:07:21.805259739Z" level=info msg="Daemon has completed initialization" Dec 13 14:07:21.828341 systemd[1]: Started docker.service. Dec 13 14:07:21.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:21.832189 env[2066]: time="2024-12-13T14:07:21.832134949Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:07:27.892854 env[1588]: time="2024-12-13T14:07:27.892768326Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:07:27.893415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 14:07:27.893577 systemd[1]: Stopped kubelet.service. Dec 13 14:07:27.907614 kernel: kauditd_printk_skb: 84 callbacks suppressed Dec 13 14:07:27.907748 kernel: audit: type=1130 audit(1734098847.893:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:27.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:27.895168 systemd[1]: Starting kubelet.service... Dec 13 14:07:27.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:27.944539 kernel: audit: type=1131 audit(1734098847.893:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:28.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:28.082012 systemd[1]: Started kubelet.service. Dec 13 14:07:28.100769 kernel: audit: type=1130 audit(1734098848.081:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:28.139624 kubelet[2196]: E1213 14:07:28.139548 2196 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:28.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 13 14:07:28.141768 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:28.141952 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:28.161640 kernel: audit: type=1131 audit(1734098848.141:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:07:28.920466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4146377297.mount: Deactivated successfully. Dec 13 14:07:30.891388 env[1588]: time="2024-12-13T14:07:30.891341159Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:30.897654 env[1588]: time="2024-12-13T14:07:30.897607040Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:30.900755 env[1588]: time="2024-12-13T14:07:30.900717203Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:30.905225 env[1588]: time="2024-12-13T14:07:30.905189788Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:30.905909 env[1588]: time="2024-12-13T14:07:30.905881317Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 14:07:30.915675 env[1588]: time="2024-12-13T14:07:30.915640323Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:07:33.358800 env[1588]: time="2024-12-13T14:07:33.358747853Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:33.367375 env[1588]: time="2024-12-13T14:07:33.367319484Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:33.374213 env[1588]: time="2024-12-13T14:07:33.374174997Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:33.378578 env[1588]: time="2024-12-13T14:07:33.378535785Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:33.379399 env[1588]: time="2024-12-13T14:07:33.379370026Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 14:07:33.388748 env[1588]: time="2024-12-13T14:07:33.388716784Z" level=info msg="PullImage 
\"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:07:35.283706 env[1588]: time="2024-12-13T14:07:35.283659147Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:35.287683 env[1588]: time="2024-12-13T14:07:35.287649469Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:35.291216 env[1588]: time="2024-12-13T14:07:35.291174192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:35.294053 env[1588]: time="2024-12-13T14:07:35.294014737Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:35.294814 env[1588]: time="2024-12-13T14:07:35.294786628Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 14:07:35.304268 env[1588]: time="2024-12-13T14:07:35.304233300Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:07:36.391966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount255067270.mount: Deactivated successfully. Dec 13 14:07:36.928747 env[1588]: time="2024-12-13T14:07:36.928689018Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:36.935509 env[1588]: time="2024-12-13T14:07:36.935460905Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:36.939724 env[1588]: time="2024-12-13T14:07:36.939671337Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:36.942523 env[1588]: time="2024-12-13T14:07:36.942479531Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:36.942855 env[1588]: time="2024-12-13T14:07:36.942823940Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 14:07:36.952464 env[1588]: time="2024-12-13T14:07:36.952420340Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:07:37.552049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3975156642.mount: Deactivated successfully. Dec 13 14:07:38.198811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Dec 13 14:07:38.198980 systemd[1]: Stopped kubelet.service. 
Dec 13 14:07:38.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:38.200495 systemd[1]: Starting kubelet.service... Dec 13 14:07:38.243612 kernel: audit: type=1130 audit(1734098858.198:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:38.243733 kernel: audit: type=1131 audit(1734098858.198:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:38.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:38.554882 systemd[1]: Started kubelet.service. Dec 13 14:07:38.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:38.576633 kernel: audit: type=1130 audit(1734098858.554:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:38.606185 kubelet[2232]: E1213 14:07:38.606140 2232 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:38.608318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:38.608456 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:38.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:07:38.626757 kernel: audit: type=1131 audit(1734098858.608:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 13 14:07:40.452652 env[1588]: time="2024-12-13T14:07:40.451827159Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:40.457444 env[1588]: time="2024-12-13T14:07:40.457031665Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:40.460685 env[1588]: time="2024-12-13T14:07:40.460628419Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:40.464209 env[1588]: time="2024-12-13T14:07:40.464160539Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:40.465050 env[1588]: time="2024-12-13T14:07:40.465022030Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 14:07:40.474514 env[1588]: time="2024-12-13T14:07:40.474473919Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:07:41.073908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521194780.mount: Deactivated successfully. Dec 13 14:07:41.093460 env[1588]: time="2024-12-13T14:07:41.093412531Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:41.100696 env[1588]: time="2024-12-13T14:07:41.100657849Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:41.104245 env[1588]: time="2024-12-13T14:07:41.104182696Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:41.108947 env[1588]: time="2024-12-13T14:07:41.108910089Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:41.109324 env[1588]: time="2024-12-13T14:07:41.109296499Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 14:07:41.118445 env[1588]: time="2024-12-13T14:07:41.118400193Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:07:41.767883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1420467460.mount: Deactivated successfully. 
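The kauditd records interleaved through this stretch carry raw epoch timestamps (for example audit(1734098858.198:211) a few entries up), while the journal prefixes use human-readable UTC times. A small conversion, sketched in Python with the epoch value copied from that record, confirms the two clocks agree:

    from datetime import datetime, timezone

    # Epoch seconds copied from "audit(1734098858.198:211)" above.
    ts = 1734098858.198
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
    # -> 2024-12-13T14:07:38.198000+00:00, matching the "Dec 13 14:07:38" journal prefix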
Dec 13 14:07:43.912710 env[1588]: time="2024-12-13T14:07:43.912654139Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:43.918450 env[1588]: time="2024-12-13T14:07:43.918412432Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:43.923253 env[1588]: time="2024-12-13T14:07:43.923218557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:43.927743 env[1588]: time="2024-12-13T14:07:43.927708624Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:43.928751 env[1588]: time="2024-12-13T14:07:43.928723309Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 14:07:48.698778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Dec 13 14:07:48.698940 systemd[1]: Stopped kubelet.service. Dec 13 14:07:48.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:48.700488 systemd[1]: Starting kubelet.service... Dec 13 14:07:48.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:48.739061 kernel: audit: type=1130 audit(1734098868.697:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:48.739157 kernel: audit: type=1131 audit(1734098868.697:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:49.030104 systemd[1]: Started kubelet.service. Dec 13 14:07:49.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:49.054653 kernel: audit: type=1130 audit(1734098869.029:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:07:49.132661 kubelet[2314]: E1213 14:07:49.132580 2314 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:49.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:07:49.134774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:49.134931 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:49.158628 kernel: audit: type=1131 audit(1734098869.134:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:07:49.946399 systemd[1]: Stopped kubelet.service. Dec 13 14:07:49.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:49.949402 systemd[1]: Starting kubelet.service... Dec 13 14:07:49.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:49.983921 kernel: audit: type=1130 audit(1734098869.946:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:49.984010 kernel: audit: type=1131 audit(1734098869.946:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:49.990766 systemd[1]: Reloading. Dec 13 14:07:50.069716 /usr/lib/systemd/system-generators/torcx-generator[2349]: time="2024-12-13T14:07:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:07:50.069746 /usr/lib/systemd/system-generators/torcx-generator[2349]: time="2024-12-13T14:07:50Z" level=info msg="torcx already run" Dec 13 14:07:50.165302 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:07:50.165321 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:07:50.182574 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:07:50.270534 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 14:07:50.270617 systemd[1]: kubelet.service: Failed with result 'signal'. 
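Each containerd PullImage sequence above ends with the requested tag returning an image reference (for example registry.k8s.io/etcd:3.5.10-0 returning sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b). A hypothetical post-processing sketch in Python extracts the tag-to-image mapping from such entries; the regex and the simplified sample string are illustrative, the values are copied from the etcd pull logged above:

    import re

    # Simplified copy of the "PullImage ... returns image reference" entry above.
    line = ('PullImage "registry.k8s.io/etcd:3.5.10-0" returns image reference '
            '"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b"')
    m = re.search(r'PullImage "([^"]+)" returns image reference "([^"]+)"', line)
    if m:
        print(f"{m.group(1)} -> {m.group(2)}")
    # -> registry.k8s.io/etcd:3.5.10-0 -> sha256:79f8d13a...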
Dec 13 14:07:50.271064 systemd[1]: Stopped kubelet.service. Dec 13 14:07:50.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:07:50.284248 systemd[1]: Starting kubelet.service... Dec 13 14:07:50.289637 kernel: audit: type=1130 audit(1734098870.270:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:07:50.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:50.401662 systemd[1]: Started kubelet.service. Dec 13 14:07:50.421679 kernel: audit: type=1130 audit(1734098870.403:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:50.497355 kubelet[2426]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:07:50.497355 kubelet[2426]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:07:50.497355 kubelet[2426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:07:50.498578 kubelet[2426]: I1213 14:07:50.498515 2426 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:07:51.567513 kubelet[2426]: I1213 14:07:51.567472 2426 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:07:51.567513 kubelet[2426]: I1213 14:07:51.567506 2426 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:07:51.568388 kubelet[2426]: I1213 14:07:51.567724 2426 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:07:51.584335 kubelet[2426]: E1213 14:07:51.584303 2426 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:51.586715 kubelet[2426]: I1213 14:07:51.586694 2426 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:07:51.596176 kubelet[2426]: I1213 14:07:51.596151 2426 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:07:51.597886 kubelet[2426]: I1213 14:07:51.597864 2426 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:07:51.598197 kubelet[2426]: I1213 14:07:51.598178 2426 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:07:51.598322 kubelet[2426]: I1213 14:07:51.598311 2426 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:07:51.598389 kubelet[2426]: I1213 14:07:51.598380 2426 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:07:51.598564 kubelet[2426]: I1213 14:07:51.598553 2426 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:07:51.600931 kubelet[2426]: I1213 14:07:51.600912 2426 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:07:51.601032 kubelet[2426]: I1213 14:07:51.601022 2426 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:07:51.601484 kubelet[2426]: W1213 14:07:51.601440 2426 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-c740448bc5&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:51.601545 kubelet[2426]: E1213 14:07:51.601500 2426 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-c740448bc5&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:51.601744 kubelet[2426]: I1213 14:07:51.601729 2426 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:07:51.604336 kubelet[2426]: I1213 14:07:51.604318 2426 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:07:51.607196 kubelet[2426]: W1213 14:07:51.607043 2426 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
10.200.20.36:6443: connect: connection refused Dec 13 14:07:51.607196 kubelet[2426]: E1213 14:07:51.607087 2426 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:51.607428 kubelet[2426]: I1213 14:07:51.607413 2426 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:07:51.607870 kubelet[2426]: I1213 14:07:51.607855 2426 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:07:51.608438 kubelet[2426]: W1213 14:07:51.608420 2426 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:07:51.609310 kubelet[2426]: I1213 14:07:51.609294 2426 server.go:1256] "Started kubelet" Dec 13 14:07:51.610000 audit[2426]: AVC avc: denied { mac_admin } for pid=2426 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:07:51.617781 kubelet[2426]: E1213 14:07:51.613114 2426 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.36:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-c740448bc5.1810c1bc6facfdb2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-c740448bc5,UID:ci-3510.3.6-a-c740448bc5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-c740448bc5,},FirstTimestamp:2024-12-13 14:07:51.60926149 +0000 UTC m=+1.197540633,LastTimestamp:2024-12-13 14:07:51.60926149 +0000 UTC m=+1.197540633,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-c740448bc5,}" Dec 13 14:07:51.617781 kubelet[2426]: I1213 14:07:51.613191 2426 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:07:51.617781 kubelet[2426]: I1213 14:07:51.613433 2426 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:07:51.617781 kubelet[2426]: I1213 14:07:51.613481 2426 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:07:51.617781 kubelet[2426]: I1213 14:07:51.614170 2426 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:07:51.617781 kubelet[2426]: I1213 14:07:51.615663 2426 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 14:07:51.617781 kubelet[2426]: I1213 14:07:51.615695 2426 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 14:07:51.617781 kubelet[2426]: I1213 14:07:51.616409 2426 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:07:51.627761 kubelet[2426]: I1213 14:07:51.627730 2426 
volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:07:51.610000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:07:51.630396 kubelet[2426]: E1213 14:07:51.628987 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-c740448bc5?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="200ms" Dec 13 14:07:51.637758 kernel: audit: type=1400 audit(1734098871.610:223): avc: denied { mac_admin } for pid=2426 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:07:51.637879 kernel: audit: type=1401 audit(1734098871.610:223): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:07:51.610000 audit[2426]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40008fa990 a1=4000b45b78 a2=40008fa960 a3=25 items=0 ppid=1 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:51.610000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:07:51.615000 audit[2426]: AVC avc: denied { mac_admin } for pid=2426 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:07:51.615000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:07:51.615000 audit[2426]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a15f20 a1=4000b45b90 a2=40008faa20 a3=25 items=0 ppid=1 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:51.615000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:07:51.623000 audit[2436]: NETFILTER_CFG table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:51.623000 audit[2436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffeef9ed50 a2=0 a3=1 items=0 ppid=2426 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:51.623000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 14:07:51.624000 audit[2437]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:51.624000 audit[2437]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff98563a0 a2=0 a3=1 items=0 ppid=2426 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:51.624000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 14:07:51.628000 audit[2439]: NETFILTER_CFG table=filter:31 family=2 entries=2 op=nft_register_chain pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:51.628000 audit[2439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc4f30930 a2=0 a3=1 items=0 ppid=2426 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:51.628000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:07:51.639124 kubelet[2426]: I1213 14:07:51.639081 2426 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:07:51.639227 kubelet[2426]: I1213 14:07:51.639181 2426 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:07:51.639000 audit[2441]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:51.639000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff33da8e0 a2=0 a3=1 items=0 ppid=2426 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:51.639000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:07:51.641140 kubelet[2426]: I1213 14:07:51.641115 2426 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:07:51.641373 kubelet[2426]: I1213 14:07:51.641354 2426 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:07:51.643252 kubelet[2426]: I1213 14:07:51.643232 2426 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:07:51.651013 kubelet[2426]: W1213 14:07:51.650953 2426 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:51.651013 kubelet[2426]: E1213 14:07:51.651014 2426 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:51.761000 audit[2447]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:51.761000 audit[2447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff18b5730 a2=0 a3=1 items=0 ppid=2426 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:51.761000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 14:07:51.762029 kubelet[2426]: I1213 14:07:51.761993 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:07:51.762000 audit[2449]: NETFILTER_CFG table=mangle:34 family=10 entries=2 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:07:51.762000 audit[2449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdc4e3490 a2=0 a3=1 items=0 ppid=2426 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:51.762000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 14:07:51.763148 kubelet[2426]: I1213 14:07:51.763120 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:07:51.763208 kubelet[2426]: I1213 14:07:51.763181 2426 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:07:51.763208 kubelet[2426]: I1213 14:07:51.763202 2426 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:07:51.763258 kubelet[2426]: E1213 14:07:51.763253 2426 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:07:51.763000 audit[2450]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:51.763000 audit[2450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1de9f80 a2=0 a3=1 items=0 ppid=2426 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:51.763000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 14:07:51.764508 kubelet[2426]: W1213 14:07:51.764419 2426 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:51.764607 kubelet[2426]: E1213 14:07:51.764515 2426 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:51.765000 audit[2454]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:51.765000 audit[2454]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7079a30 a2=0 a3=1 items=0 ppid=2426 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:51.765000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 14:07:51.765000 audit[2453]: NETFILTER_CFG table=mangle:37 family=10 entries=1 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:07:51.765000 audit[2453]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd925e860 a2=0 a3=1 items=0 ppid=2426 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:51.765000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 14:07:51.766000 audit[2455]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:07:51.766000 audit[2455]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc4138540 a2=0 a3=1 items=0 ppid=2426 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:51.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 14:07:51.767000 audit[2456]: NETFILTER_CFG table=nat:39 family=10 entries=2 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:07:51.767000 audit[2456]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffce54ad80 a2=0 a3=1 items=0 ppid=2426 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:51.767000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 14:07:51.768000 audit[2457]: NETFILTER_CFG table=filter:40 family=10 entries=2 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:07:51.768000 audit[2457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd9f4ceb0 a2=0 a3=1 items=0 ppid=2426 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:51.768000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 14:07:51.809680 kubelet[2426]: I1213 14:07:51.809653 2426 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-c740448bc5" Dec 13 14:07:51.810221 kubelet[2426]: E1213 14:07:51.810197 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-3510.3.6-a-c740448bc5" Dec 13 14:07:51.810631 kubelet[2426]: I1213 14:07:51.810616 2426 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 
14:07:51.810727 kubelet[2426]: I1213 14:07:51.810716 2426 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:07:51.810802 kubelet[2426]: I1213 14:07:51.810793 2426 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:07:51.814983 kubelet[2426]: I1213 14:07:51.814953 2426 policy_none.go:49] "None policy: Start" Dec 13 14:07:51.816009 kubelet[2426]: I1213 14:07:51.815984 2426 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:07:51.816088 kubelet[2426]: I1213 14:07:51.816031 2426 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:07:51.821000 audit[2426]: AVC avc: denied { mac_admin } for pid=2426 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:07:51.821000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:07:51.821000 audit[2426]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a7cab0 a1=4000b45230 a2=4000a7ca80 a3=25 items=0 ppid=1 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:51.822589 kubelet[2426]: I1213 14:07:51.822336 2426 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:07:51.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:07:51.824442 kubelet[2426]: I1213 14:07:51.824368 2426 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 14:07:51.824620 kubelet[2426]: I1213 14:07:51.824587 2426 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:07:51.829393 kubelet[2426]: E1213 14:07:51.829359 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-c740448bc5?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="400ms" Dec 13 14:07:51.829500 kubelet[2426]: E1213 14:07:51.829477 2426 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-c740448bc5\" not found" Dec 13 14:07:51.863967 kubelet[2426]: I1213 14:07:51.863937 2426 topology_manager.go:215] "Topology Admit Handler" podUID="04fd05d10854301aa332942f541428db" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:51.865591 kubelet[2426]: I1213 14:07:51.865561 2426 topology_manager.go:215] "Topology Admit Handler" podUID="cecc3f31cfb24345f56217dfb5322223" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:51.869186 kubelet[2426]: I1213 14:07:51.869144 2426 topology_manager.go:215] "Topology Admit Handler" podUID="44de71fcc57eddc926616b80695a594a" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:52.011797 kubelet[2426]: I1213 14:07:52.011760 2426 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-c740448bc5" Dec 13 14:07:52.012147 kubelet[2426]: E1213 14:07:52.012127 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-3510.3.6-a-c740448bc5" Dec 13 14:07:52.040547 kubelet[2426]: I1213 14:07:52.040509 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04fd05d10854301aa332942f541428db-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-c740448bc5\" (UID: \"04fd05d10854301aa332942f541428db\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:52.040661 kubelet[2426]: I1213 14:07:52.040557 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cecc3f31cfb24345f56217dfb5322223-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-c740448bc5\" (UID: \"cecc3f31cfb24345f56217dfb5322223\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:52.040661 kubelet[2426]: I1213 14:07:52.040582 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cecc3f31cfb24345f56217dfb5322223-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-c740448bc5\" (UID: \"cecc3f31cfb24345f56217dfb5322223\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:52.040661 kubelet[2426]: I1213 14:07:52.040626 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44de71fcc57eddc926616b80695a594a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-c740448bc5\" 
(UID: \"44de71fcc57eddc926616b80695a594a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:52.040661 kubelet[2426]: I1213 14:07:52.040655 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/44de71fcc57eddc926616b80695a594a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-c740448bc5\" (UID: \"44de71fcc57eddc926616b80695a594a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:52.040765 kubelet[2426]: I1213 14:07:52.040677 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cecc3f31cfb24345f56217dfb5322223-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-c740448bc5\" (UID: \"cecc3f31cfb24345f56217dfb5322223\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:52.040765 kubelet[2426]: I1213 14:07:52.040701 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44de71fcc57eddc926616b80695a594a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-c740448bc5\" (UID: \"44de71fcc57eddc926616b80695a594a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:52.040765 kubelet[2426]: I1213 14:07:52.040722 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/44de71fcc57eddc926616b80695a594a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-c740448bc5\" (UID: \"44de71fcc57eddc926616b80695a594a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:52.040765 kubelet[2426]: I1213 14:07:52.040742 2426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44de71fcc57eddc926616b80695a594a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-c740448bc5\" (UID: \"44de71fcc57eddc926616b80695a594a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:52.170947 env[1588]: time="2024-12-13T14:07:52.170696186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-c740448bc5,Uid:04fd05d10854301aa332942f541428db,Namespace:kube-system,Attempt:0,}" Dec 13 14:07:52.175349 env[1588]: time="2024-12-13T14:07:52.175302988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-c740448bc5,Uid:cecc3f31cfb24345f56217dfb5322223,Namespace:kube-system,Attempt:0,}" Dec 13 14:07:52.180652 env[1588]: time="2024-12-13T14:07:52.180585709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-c740448bc5,Uid:44de71fcc57eddc926616b80695a594a,Namespace:kube-system,Attempt:0,}" Dec 13 14:07:52.230467 kubelet[2426]: E1213 14:07:52.230438 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-c740448bc5?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="800ms" Dec 13 14:07:52.414625 kubelet[2426]: I1213 14:07:52.414526 2426 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-c740448bc5" Dec 13 14:07:52.414930 kubelet[2426]: 
E1213 14:07:52.414888 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-3510.3.6-a-c740448bc5" Dec 13 14:07:52.614973 kubelet[2426]: W1213 14:07:52.614880 2426 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:52.614973 kubelet[2426]: E1213 14:07:52.614952 2426 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:52.683700 kubelet[2426]: W1213 14:07:52.683636 2426 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:52.683700 kubelet[2426]: E1213 14:07:52.683679 2426 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:52.756453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2680761858.mount: Deactivated successfully. Dec 13 14:07:52.771438 kubelet[2426]: W1213 14:07:52.771375 2426 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:52.771438 kubelet[2426]: E1213 14:07:52.771436 2426 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:52.792560 env[1588]: time="2024-12-13T14:07:52.792515003Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.794465 env[1588]: time="2024-12-13T14:07:52.794440567Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.803243 env[1588]: time="2024-12-13T14:07:52.803187198Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.805150 env[1588]: time="2024-12-13T14:07:52.805125081Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.808278 env[1588]: time="2024-12-13T14:07:52.808249412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.812526 env[1588]: time="2024-12-13T14:07:52.812485556Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.815829 env[1588]: time="2024-12-13T14:07:52.815795396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.818388 env[1588]: time="2024-12-13T14:07:52.818354681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.821108 env[1588]: time="2024-12-13T14:07:52.821078677Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.824656 env[1588]: time="2024-12-13T14:07:52.824625462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.833051 env[1588]: time="2024-12-13T14:07:52.833019315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.844022 env[1588]: time="2024-12-13T14:07:52.843979493Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:52.901338 env[1588]: time="2024-12-13T14:07:52.891903476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:07:52.901338 env[1588]: time="2024-12-13T14:07:52.891949633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:07:52.901338 env[1588]: time="2024-12-13T14:07:52.891959633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:07:52.901338 env[1588]: time="2024-12-13T14:07:52.892101584Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5439b1f12da5fa98b70dfdd2d06ffdd2904f5a569ca5cce8050da1f15ee6edc5 pid=2465 runtime=io.containerd.runc.v2 Dec 13 14:07:52.917161 env[1588]: time="2024-12-13T14:07:52.917057716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:07:52.917161 env[1588]: time="2024-12-13T14:07:52.917106113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:07:52.917161 env[1588]: time="2024-12-13T14:07:52.917117632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:07:52.917525 env[1588]: time="2024-12-13T14:07:52.917451292Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/18a0b39b479993385b8dc9533f8ceffc5ff0eafd25cbe1a01b976412c70c9ae5 pid=2493 runtime=io.containerd.runc.v2 Dec 13 14:07:52.934548 env[1588]: time="2024-12-13T14:07:52.933405048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:07:52.934548 env[1588]: time="2024-12-13T14:07:52.933649553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:07:52.934548 env[1588]: time="2024-12-13T14:07:52.933879219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:07:52.934548 env[1588]: time="2024-12-13T14:07:52.934323752Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/64a380bb5d66d9b8c4926f2db7305cf947cd021b5c0018609db91b9b87bc1db5 pid=2517 runtime=io.containerd.runc.v2 Dec 13 14:07:52.970837 env[1588]: time="2024-12-13T14:07:52.970778149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-c740448bc5,Uid:44de71fcc57eddc926616b80695a594a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5439b1f12da5fa98b70dfdd2d06ffdd2904f5a569ca5cce8050da1f15ee6edc5\"" Dec 13 14:07:52.980165 env[1588]: time="2024-12-13T14:07:52.978077748Z" level=info msg="CreateContainer within sandbox \"5439b1f12da5fa98b70dfdd2d06ffdd2904f5a569ca5cce8050da1f15ee6edc5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:07:53.005093 env[1588]: time="2024-12-13T14:07:53.005034684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-c740448bc5,Uid:04fd05d10854301aa332942f541428db,Namespace:kube-system,Attempt:0,} returns sandbox id \"18a0b39b479993385b8dc9533f8ceffc5ff0eafd25cbe1a01b976412c70c9ae5\"" Dec 13 14:07:53.008907 env[1588]: time="2024-12-13T14:07:53.008855658Z" level=info msg="CreateContainer within sandbox \"18a0b39b479993385b8dc9533f8ceffc5ff0eafd25cbe1a01b976412c70c9ae5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:07:53.014003 env[1588]: time="2024-12-13T14:07:53.013951876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-c740448bc5,Uid:cecc3f31cfb24345f56217dfb5322223,Namespace:kube-system,Attempt:0,} returns sandbox id \"64a380bb5d66d9b8c4926f2db7305cf947cd021b5c0018609db91b9b87bc1db5\"" Dec 13 14:07:53.017028 env[1588]: time="2024-12-13T14:07:53.016979417Z" level=info msg="CreateContainer within sandbox \"64a380bb5d66d9b8c4926f2db7305cf947cd021b5c0018609db91b9b87bc1db5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:07:53.031989 kubelet[2426]: E1213 14:07:53.031944 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-c740448bc5?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="1.6s" Dec 13 14:07:53.039515 env[1588]: time="2024-12-13T14:07:53.039447248Z" level=info msg="CreateContainer within sandbox \"5439b1f12da5fa98b70dfdd2d06ffdd2904f5a569ca5cce8050da1f15ee6edc5\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1d70000a38738899b22d36ad14fd3d1bf819eb89042f6023371e368b764c97f7\"" Dec 13 14:07:53.040322 env[1588]: time="2024-12-13T14:07:53.040282759Z" level=info msg="StartContainer for \"1d70000a38738899b22d36ad14fd3d1bf819eb89042f6023371e368b764c97f7\"" Dec 13 14:07:53.068630 env[1588]: time="2024-12-13T14:07:53.067566385Z" level=info msg="CreateContainer within sandbox \"18a0b39b479993385b8dc9533f8ceffc5ff0eafd25cbe1a01b976412c70c9ae5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"abcce7298cb75743ff24bde6f5442462ae9a86404787f98c4b10a9952dd1b11e\"" Dec 13 14:07:53.068630 env[1588]: time="2024-12-13T14:07:53.068252824Z" level=info msg="StartContainer for \"abcce7298cb75743ff24bde6f5442462ae9a86404787f98c4b10a9952dd1b11e\"" Dec 13 14:07:53.088074 env[1588]: time="2024-12-13T14:07:53.088010655Z" level=info msg="CreateContainer within sandbox \"64a380bb5d66d9b8c4926f2db7305cf947cd021b5c0018609db91b9b87bc1db5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"572dbc15419be8a3ac0c4658b84a8b694e3805f8ad3fbec504563f42ee4be8eb\"" Dec 13 14:07:53.088539 env[1588]: time="2024-12-13T14:07:53.088499626Z" level=info msg="StartContainer for \"572dbc15419be8a3ac0c4658b84a8b694e3805f8ad3fbec504563f42ee4be8eb\"" Dec 13 14:07:53.127024 env[1588]: time="2024-12-13T14:07:53.126420263Z" level=info msg="StartContainer for \"1d70000a38738899b22d36ad14fd3d1bf819eb89042f6023371e368b764c97f7\" returns successfully" Dec 13 14:07:53.144083 kubelet[2426]: W1213 14:07:53.143988 2426 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-c740448bc5&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:53.144083 kubelet[2426]: E1213 14:07:53.144049 2426 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-c740448bc5&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Dec 13 14:07:53.189459 env[1588]: time="2024-12-13T14:07:53.189330861Z" level=info msg="StartContainer for \"572dbc15419be8a3ac0c4658b84a8b694e3805f8ad3fbec504563f42ee4be8eb\" returns successfully" Dec 13 14:07:53.192858 env[1588]: time="2024-12-13T14:07:53.192814455Z" level=info msg="StartContainer for \"abcce7298cb75743ff24bde6f5442462ae9a86404787f98c4b10a9952dd1b11e\" returns successfully" Dec 13 14:07:53.217467 kubelet[2426]: I1213 14:07:53.217147 2426 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-c740448bc5" Dec 13 14:07:53.217648 kubelet[2426]: E1213 14:07:53.217575 2426 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-3510.3.6-a-c740448bc5" Dec 13 14:07:54.820114 kubelet[2426]: I1213 14:07:54.820087 2426 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-c740448bc5" Dec 13 14:07:55.133977 kubelet[2426]: E1213 14:07:55.133872 2426 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.6-a-c740448bc5\" not found" node="ci-3510.3.6-a-c740448bc5" Dec 13 14:07:55.216397 kubelet[2426]: I1213 14:07:55.216366 2426 
kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-c740448bc5" Dec 13 14:07:55.609408 kubelet[2426]: I1213 14:07:55.609368 2426 apiserver.go:52] "Watching apiserver" Dec 13 14:07:55.639699 kubelet[2426]: I1213 14:07:55.639659 2426 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:07:56.677102 kubelet[2426]: W1213 14:07:56.677066 2426 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:07:58.097643 systemd[1]: Reloading. Dec 13 14:07:58.177747 /usr/lib/systemd/system-generators/torcx-generator[2718]: time="2024-12-13T14:07:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:07:58.177779 /usr/lib/systemd/system-generators/torcx-generator[2718]: time="2024-12-13T14:07:58Z" level=info msg="torcx already run" Dec 13 14:07:58.259003 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:07:58.259174 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:07:58.277162 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:07:58.375149 systemd[1]: Stopping kubelet.service... Dec 13 14:07:58.391965 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:07:58.392271 systemd[1]: Stopped kubelet.service. Dec 13 14:07:58.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:58.394185 systemd[1]: Starting kubelet.service... Dec 13 14:07:58.397525 kernel: kauditd_printk_skb: 46 callbacks suppressed Dec 13 14:07:58.397607 kernel: audit: type=1131 audit(1734098878.391:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:58.524971 systemd[1]: Started kubelet.service. Dec 13 14:07:58.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:58.545935 kernel: audit: type=1130 audit(1734098878.524:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:07:58.608060 kubelet[2790]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:07:58.608060 kubelet[2790]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Dec 13 14:07:58.608060 kubelet[2790]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:07:58.608542 kubelet[2790]: I1213 14:07:58.608128 2790 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:07:58.621925 kubelet[2790]: I1213 14:07:58.621881 2790 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:07:58.621925 kubelet[2790]: I1213 14:07:58.621921 2790 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:07:58.622150 kubelet[2790]: I1213 14:07:58.622130 2790 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:07:58.623852 kubelet[2790]: I1213 14:07:58.623831 2790 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:07:58.626159 kubelet[2790]: I1213 14:07:58.625613 2790 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:07:58.631351 kubelet[2790]: I1213 14:07:58.631332 2790 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:07:58.631881 kubelet[2790]: I1213 14:07:58.631867 2790 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:07:58.632151 kubelet[2790]: I1213 14:07:58.632133 2790 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:07:58.632266 kubelet[2790]: I1213 14:07:58.632256 2790 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:07:58.632315 kubelet[2790]: I1213 14:07:58.632307 2790 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:07:58.632403 kubelet[2790]: I1213 14:07:58.632394 2790 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:07:58.632557 kubelet[2790]: 
I1213 14:07:58.632546 2790 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:07:58.632643 kubelet[2790]: I1213 14:07:58.632633 2790 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:07:58.632724 kubelet[2790]: I1213 14:07:58.632714 2790 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:07:58.632784 kubelet[2790]: I1213 14:07:58.632775 2790 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:07:58.648025 kubelet[2790]: I1213 14:07:58.647992 2790 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:07:58.648197 kubelet[2790]: I1213 14:07:58.648180 2790 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:07:58.648627 kubelet[2790]: I1213 14:07:58.648583 2790 server.go:1256] "Started kubelet" Dec 13 14:07:58.649000 audit[2790]: AVC avc: denied { mac_admin } for pid=2790 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:07:58.671327 kubelet[2790]: I1213 14:07:58.650480 2790 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 14:07:58.671327 kubelet[2790]: I1213 14:07:58.650510 2790 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 14:07:58.671327 kubelet[2790]: I1213 14:07:58.650544 2790 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:07:58.671327 kubelet[2790]: I1213 14:07:58.655487 2790 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:07:58.671327 kubelet[2790]: I1213 14:07:58.656186 2790 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:07:58.671327 kubelet[2790]: I1213 14:07:58.657154 2790 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:07:58.671327 kubelet[2790]: I1213 14:07:58.657305 2790 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:07:58.671327 kubelet[2790]: I1213 14:07:58.658610 2790 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:07:58.671327 kubelet[2790]: I1213 14:07:58.660153 2790 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:07:58.671327 kubelet[2790]: I1213 14:07:58.660325 2790 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:07:58.671327 kubelet[2790]: I1213 14:07:58.666933 2790 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:07:58.671327 kubelet[2790]: I1213 14:07:58.667727 2790 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:07:58.671327 kubelet[2790]: I1213 14:07:58.667750 2790 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:07:58.671327 kubelet[2790]: I1213 14:07:58.667764 2790 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:07:58.671327 kubelet[2790]: E1213 14:07:58.667806 2790 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:07:58.690689 kernel: audit: type=1400 audit(1734098878.649:240): avc: denied { mac_admin } for pid=2790 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:07:58.690780 kernel: audit: type=1401 audit(1734098878.649:240): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:07:58.649000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:07:58.691286 kubelet[2790]: I1213 14:07:58.691264 2790 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:07:58.691493 kubelet[2790]: I1213 14:07:58.691474 2790 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:07:58.649000 audit[2790]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bdf2c0 a1=4000b38840 a2=4000bdf290 a3=25 items=0 ppid=1 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:58.710811 kubelet[2790]: I1213 14:07:58.703844 2790 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:07:58.722372 kubelet[2790]: E1213 14:07:58.722347 2790 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:07:58.724299 kernel: audit: type=1300 audit(1734098878.649:240): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bdf2c0 a1=4000b38840 a2=4000bdf290 a3=25 items=0 ppid=1 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:58.724398 kernel: audit: type=1327 audit(1734098878.649:240): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:07:58.649000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:07:58.649000 audit[2790]: AVC avc: denied { mac_admin } for pid=2790 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:07:58.765982 kernel: audit: type=1400 audit(1734098878.649:241): avc: denied { mac_admin } for pid=2790 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:07:58.766923 kernel: audit: type=1401 audit(1734098878.649:241): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:07:58.649000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:07:58.768065 kubelet[2790]: E1213 14:07:58.768034 2790 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:07:58.649000 audit[2790]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000cb4dc0 a1=4000b38858 a2=4000bdf350 a3=25 items=0 ppid=1 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:58.776414 kubelet[2790]: I1213 14:07:58.776392 2790 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-c740448bc5" Dec 13 14:07:58.802608 kernel: audit: type=1300 audit(1734098878.649:241): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000cb4dc0 a1=4000b38858 a2=4000bdf350 a3=25 items=0 ppid=1 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:58.803309 kernel: audit: type=1327 audit(1734098878.649:241): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:07:58.649000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 
14:07:58.830504 kubelet[2790]: I1213 14:07:58.830479 2790 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.6-a-c740448bc5" Dec 13 14:07:58.831204 kubelet[2790]: I1213 14:07:58.831187 2790 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-c740448bc5" Dec 13 14:07:58.847814 kubelet[2790]: I1213 14:07:58.847791 2790 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:07:58.847963 kubelet[2790]: I1213 14:07:58.847949 2790 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:07:58.848023 kubelet[2790]: I1213 14:07:58.848015 2790 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:07:58.848222 kubelet[2790]: I1213 14:07:58.848211 2790 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:07:58.848302 kubelet[2790]: I1213 14:07:58.848292 2790 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:07:58.848392 kubelet[2790]: I1213 14:07:58.848383 2790 policy_none.go:49] "None policy: Start" Dec 13 14:07:58.849113 kubelet[2790]: I1213 14:07:58.849098 2790 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:07:58.849224 kubelet[2790]: I1213 14:07:58.849213 2790 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:07:58.849478 kubelet[2790]: I1213 14:07:58.849467 2790 state_mem.go:75] "Updated machine memory state" Dec 13 14:07:58.850749 kubelet[2790]: I1213 14:07:58.850733 2790 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:07:58.850000 audit[2790]: AVC avc: denied { mac_admin } for pid=2790 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:07:58.850000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:07:58.850000 audit[2790]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000dd7d10 a1=4000daf3c8 a2=4000dd7ce0 a3=25 items=0 ppid=1 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:07:58.850000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:07:58.851177 kubelet[2790]: I1213 14:07:58.851160 2790 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 14:07:58.853431 kubelet[2790]: I1213 14:07:58.852485 2790 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:07:58.969214 kubelet[2790]: I1213 14:07:58.969176 2790 topology_manager.go:215] "Topology Admit Handler" podUID="cecc3f31cfb24345f56217dfb5322223" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:58.969364 kubelet[2790]: I1213 14:07:58.969304 2790 topology_manager.go:215] "Topology Admit Handler" podUID="44de71fcc57eddc926616b80695a594a" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:58.969874 kubelet[2790]: I1213 14:07:58.969854 2790 topology_manager.go:215] "Topology Admit Handler" podUID="04fd05d10854301aa332942f541428db" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:58.985701 kubelet[2790]: W1213 14:07:58.985662 2790 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:07:58.988253 kubelet[2790]: W1213 14:07:58.988213 2790 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:07:58.988380 kubelet[2790]: E1213 14:07:58.988300 2790 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-c740448bc5\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:58.988414 kubelet[2790]: W1213 14:07:58.988383 2790 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:07:59.062713 kubelet[2790]: I1213 14:07:59.062671 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cecc3f31cfb24345f56217dfb5322223-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-c740448bc5\" (UID: \"cecc3f31cfb24345f56217dfb5322223\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:59.062855 kubelet[2790]: I1213 14:07:59.062727 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44de71fcc57eddc926616b80695a594a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-c740448bc5\" (UID: \"44de71fcc57eddc926616b80695a594a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:59.062855 kubelet[2790]: I1213 14:07:59.062792 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44de71fcc57eddc926616b80695a594a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-c740448bc5\" (UID: \"44de71fcc57eddc926616b80695a594a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:59.062855 kubelet[2790]: I1213 14:07:59.062816 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/44de71fcc57eddc926616b80695a594a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-c740448bc5\" (UID: 
\"44de71fcc57eddc926616b80695a594a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:59.062855 kubelet[2790]: I1213 14:07:59.062854 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04fd05d10854301aa332942f541428db-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-c740448bc5\" (UID: \"04fd05d10854301aa332942f541428db\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:59.062956 kubelet[2790]: I1213 14:07:59.062875 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cecc3f31cfb24345f56217dfb5322223-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-c740448bc5\" (UID: \"cecc3f31cfb24345f56217dfb5322223\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:59.062956 kubelet[2790]: I1213 14:07:59.062922 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cecc3f31cfb24345f56217dfb5322223-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-c740448bc5\" (UID: \"cecc3f31cfb24345f56217dfb5322223\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:59.062956 kubelet[2790]: I1213 14:07:59.062944 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44de71fcc57eddc926616b80695a594a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-c740448bc5\" (UID: \"44de71fcc57eddc926616b80695a594a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:59.063045 kubelet[2790]: I1213 14:07:59.062963 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/44de71fcc57eddc926616b80695a594a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-c740448bc5\" (UID: \"44de71fcc57eddc926616b80695a594a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:59.635049 kubelet[2790]: I1213 14:07:59.634993 2790 apiserver.go:52] "Watching apiserver" Dec 13 14:07:59.661218 kubelet[2790]: I1213 14:07:59.661171 2790 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:07:59.758714 kubelet[2790]: W1213 14:07:59.758685 2790 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:07:59.758944 kubelet[2790]: E1213 14:07:59.758926 2790 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-c740448bc5\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:59.767847 kubelet[2790]: W1213 14:07:59.767817 2790 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:07:59.768272 kubelet[2790]: E1213 14:07:59.768254 2790 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.6-a-c740448bc5\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-c740448bc5" Dec 13 14:07:59.837346 kubelet[2790]: I1213 14:07:59.837310 2790 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-c740448bc5" podStartSLOduration=3.837249274 podStartE2EDuration="3.837249274s" podCreationTimestamp="2024-12-13 14:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:07:59.81379958 +0000 UTC m=+1.282935964" watchObservedRunningTime="2024-12-13 14:07:59.837249274 +0000 UTC m=+1.306385698" Dec 13 14:07:59.837756 kubelet[2790]: I1213 14:07:59.837715 2790 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-c740448bc5" podStartSLOduration=1.837683892 podStartE2EDuration="1.837683892s" podCreationTimestamp="2024-12-13 14:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:07:59.837681572 +0000 UTC m=+1.306817956" watchObservedRunningTime="2024-12-13 14:07:59.837683892 +0000 UTC m=+1.306820276" Dec 13 14:07:59.885483 kubelet[2790]: I1213 14:07:59.885362 2790 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.6-a-c740448bc5" podStartSLOduration=1.885286243 podStartE2EDuration="1.885286243s" podCreationTimestamp="2024-12-13 14:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:07:59.863103843 +0000 UTC m=+1.332240227" watchObservedRunningTime="2024-12-13 14:07:59.885286243 +0000 UTC m=+1.354422587" Dec 13 14:08:03.521968 sudo[2056]: pam_unix(sudo:session): session closed for user root Dec 13 14:08:03.520000 audit[2056]: USER_END pid=2056 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:08:03.526927 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 13 14:08:03.548052 kernel: audit: type=1106 audit(1734098883.520:243): pid=2056 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:08:03.548141 kernel: audit: type=1104 audit(1734098883.520:244): pid=2056 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:08:03.520000 audit[2056]: CRED_DISP pid=2056 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:08:03.600372 sshd[2052]: pam_unix(sshd:session): session closed for user core Dec 13 14:08:03.599000 audit[2052]: USER_END pid=2052 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:08:03.603712 systemd[1]: sshd@6-10.200.20.36:22-10.200.16.10:51032.service: Deactivated successfully. Dec 13 14:08:03.604546 systemd[1]: session-9.scope: Deactivated successfully. 
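The "Observed pod startup duration" records just above report startup SLO figures for the three static control-plane pods as structured klog key=value fields. A minimal sketch for pulling those figures back out of journal text shaped like this log (Python, standard library only; the regex and the extract_startup_durations helper are assumptions based on the line format shown here, not kubelet-provided tooling):

import re

# Matches kubelet pod_startup_latency_tracker records as they appear in this journal, e.g.
# ... "Observed pod startup duration" pod="kube-system/..." podStartSLOduration=3.837249274 ...
LATENCY_RE = re.compile(
    r'"Observed pod startup duration"\s+pod="(?P<pod>[^"]+)"'
    r'.*?podStartSLOduration=(?P<slo>[0-9.]+)'
)

def extract_startup_durations(lines):
    """Yield (pod, seconds) for every startup-latency record in an iterable of log lines."""
    for line in lines:
        m = LATENCY_RE.search(line)
        if m:
            yield m.group("pod"), float(m.group("slo"))

# Abridged record copied from this journal:
sample = ('kubelet[2790]: I1213 14:07:59.837310 2790 pod_startup_latency_tracker.go:102] '
          '"Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-c740448bc5" '
          'podStartSLOduration=3.837249274')
print(list(extract_startup_durations([sample])))
# [('kube-system/kube-apiserver-ci-3510.3.6-a-c740448bc5', 3.837249274)]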
Dec 13 14:08:03.626728 systemd-logind[1575]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:08:03.600000 audit[2052]: CRED_DISP pid=2052 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:08:03.647778 kernel: audit: type=1106 audit(1734098883.599:245): pid=2052 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:08:03.647835 kernel: audit: type=1104 audit(1734098883.600:246): pid=2052 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:08:03.648095 systemd-logind[1575]: Removed session 9. Dec 13 14:08:03.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.36:22-10.200.16.10:51032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:03.670796 kernel: audit: type=1131 audit(1734098883.602:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.36:22-10.200.16.10:51032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:08:12.701273 kubelet[2790]: I1213 14:08:12.701244 2790 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:08:12.702144 env[1588]: time="2024-12-13T14:08:12.702061774Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
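The NETFILTER_CFG audit records in this journal (the kubelet's KUBE-FIREWALL and KUBE-KUBELET-CANARY entries earlier, and kube-proxy's KUBE-PROXY-CANARY chains in the entries that follow) each carry the originating command line as a hex-encoded PROCTITLE field, with NUL bytes separating the argv elements. A small sketch for turning such a field back into a readable command (Python, standard library only; decode_proctitle is a hypothetical helper name, not part of any audit tooling):

import binascii

def decode_proctitle(hex_field: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv with NUL separators."""
    raw = binascii.unhexlify(hex_field)
    # Join argv elements with spaces; original quoting of multi-word arguments is not preserved.
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

# PROCTITLE value copied from the 14:08:13.405 KUBE-PROXY-CANARY record below:
print(decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
))
# iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle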
Dec 13 14:08:12.702534 kubelet[2790]: I1213 14:08:12.702516 2790 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:08:12.798668 kubelet[2790]: I1213 14:08:12.798627 2790 topology_manager.go:215] "Topology Admit Handler" podUID="74d52a1e-27ed-44d9-a613-e88d152eb6f3" podNamespace="kube-system" podName="kube-proxy-6fdml" Dec 13 14:08:12.841794 kubelet[2790]: I1213 14:08:12.841755 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74d52a1e-27ed-44d9-a613-e88d152eb6f3-xtables-lock\") pod \"kube-proxy-6fdml\" (UID: \"74d52a1e-27ed-44d9-a613-e88d152eb6f3\") " pod="kube-system/kube-proxy-6fdml" Dec 13 14:08:12.841985 kubelet[2790]: I1213 14:08:12.841973 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74d52a1e-27ed-44d9-a613-e88d152eb6f3-lib-modules\") pod \"kube-proxy-6fdml\" (UID: \"74d52a1e-27ed-44d9-a613-e88d152eb6f3\") " pod="kube-system/kube-proxy-6fdml" Dec 13 14:08:12.842064 kubelet[2790]: I1213 14:08:12.842053 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/74d52a1e-27ed-44d9-a613-e88d152eb6f3-kube-proxy\") pod \"kube-proxy-6fdml\" (UID: \"74d52a1e-27ed-44d9-a613-e88d152eb6f3\") " pod="kube-system/kube-proxy-6fdml" Dec 13 14:08:12.842139 kubelet[2790]: I1213 14:08:12.842128 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfscq\" (UniqueName: \"kubernetes.io/projected/74d52a1e-27ed-44d9-a613-e88d152eb6f3-kube-api-access-qfscq\") pod \"kube-proxy-6fdml\" (UID: \"74d52a1e-27ed-44d9-a613-e88d152eb6f3\") " pod="kube-system/kube-proxy-6fdml" Dec 13 14:08:13.102789 env[1588]: time="2024-12-13T14:08:13.102675070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6fdml,Uid:74d52a1e-27ed-44d9-a613-e88d152eb6f3,Namespace:kube-system,Attempt:0,}" Dec 13 14:08:13.141675 env[1588]: time="2024-12-13T14:08:13.139994439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:13.141675 env[1588]: time="2024-12-13T14:08:13.140030357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:13.141675 env[1588]: time="2024-12-13T14:08:13.140040637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:13.141675 env[1588]: time="2024-12-13T14:08:13.140155992Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2782bd177da76ac6ddb7c732de9692691d711bd65b675bac5fea7d9037c191f pid=2876 runtime=io.containerd.runc.v2 Dec 13 14:08:13.170907 kubelet[2790]: I1213 14:08:13.170860 2790 topology_manager.go:215] "Topology Admit Handler" podUID="53c9e5f5-0533-441e-96ec-6851ffdde7fa" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-ckv88" Dec 13 14:08:13.234194 env[1588]: time="2024-12-13T14:08:13.234142347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6fdml,Uid:74d52a1e-27ed-44d9-a613-e88d152eb6f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2782bd177da76ac6ddb7c732de9692691d711bd65b675bac5fea7d9037c191f\"" Dec 13 14:08:13.239654 env[1588]: time="2024-12-13T14:08:13.239344256Z" level=info msg="CreateContainer within sandbox \"f2782bd177da76ac6ddb7c732de9692691d711bd65b675bac5fea7d9037c191f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:08:13.246335 kubelet[2790]: I1213 14:08:13.246296 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qddlp\" (UniqueName: \"kubernetes.io/projected/53c9e5f5-0533-441e-96ec-6851ffdde7fa-kube-api-access-qddlp\") pod \"tigera-operator-c7ccbd65-ckv88\" (UID: \"53c9e5f5-0533-441e-96ec-6851ffdde7fa\") " pod="tigera-operator/tigera-operator-c7ccbd65-ckv88" Dec 13 14:08:13.246481 kubelet[2790]: I1213 14:08:13.246363 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/53c9e5f5-0533-441e-96ec-6851ffdde7fa-var-lib-calico\") pod \"tigera-operator-c7ccbd65-ckv88\" (UID: \"53c9e5f5-0533-441e-96ec-6851ffdde7fa\") " pod="tigera-operator/tigera-operator-c7ccbd65-ckv88" Dec 13 14:08:13.280388 env[1588]: time="2024-12-13T14:08:13.280330357Z" level=info msg="CreateContainer within sandbox \"f2782bd177da76ac6ddb7c732de9692691d711bd65b675bac5fea7d9037c191f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"df4a6658bee4f16bdafb690a5c0d6b1ce47701fcfe3b50535ef0eb10c843f04c\"" Dec 13 14:08:13.282480 env[1588]: time="2024-12-13T14:08:13.281348355Z" level=info msg="StartContainer for \"df4a6658bee4f16bdafb690a5c0d6b1ce47701fcfe3b50535ef0eb10c843f04c\"" Dec 13 14:08:13.337248 env[1588]: time="2024-12-13T14:08:13.336947504Z" level=info msg="StartContainer for \"df4a6658bee4f16bdafb690a5c0d6b1ce47701fcfe3b50535ef0eb10c843f04c\" returns successfully" Dec 13 14:08:13.405000 audit[2972]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.405000 audit[2972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd3a53cf0 a2=0 a3=1 items=0 ppid=2930 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.445701 kernel: audit: type=1325 audit(1734098893.405:248): table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.445857 kernel: audit: type=1300 audit(1734098893.405:248): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd3a53cf0 a2=0 a3=1 items=0 
ppid=2930 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.405000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:08:13.461503 kernel: audit: type=1327 audit(1734098893.405:248): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:08:13.411000 audit[2971]: NETFILTER_CFG table=mangle:42 family=10 entries=1 op=nft_register_chain pid=2971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.475274 kernel: audit: type=1325 audit(1734098893.411:249): table=mangle:42 family=10 entries=1 op=nft_register_chain pid=2971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.475673 env[1588]: time="2024-12-13T14:08:13.475621929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-ckv88,Uid:53c9e5f5-0533-441e-96ec-6851ffdde7fa,Namespace:tigera-operator,Attempt:0,}" Dec 13 14:08:13.411000 audit[2971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc50849c0 a2=0 a3=1 items=0 ppid=2930 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.502328 kernel: audit: type=1300 audit(1734098893.411:249): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc50849c0 a2=0 a3=1 items=0 ppid=2930 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.411000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:08:13.516124 kernel: audit: type=1327 audit(1734098893.411:249): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:08:13.414000 audit[2973]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_chain pid=2973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.530259 kernel: audit: type=1325 audit(1734098893.414:250): table=nat:43 family=2 entries=1 op=nft_register_chain pid=2973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.414000 audit[2973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff77fcc90 a2=0 a3=1 items=0 ppid=2930 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.556228 kernel: audit: type=1300 audit(1734098893.414:250): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff77fcc90 a2=0 a3=1 items=0 ppid=2930 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.414000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:08:13.570918 kernel: audit: type=1327 audit(1734098893.414:250): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:08:13.571067 kernel: audit: type=1325 audit(1734098893.415:251): table=filter:44 family=2 entries=1 op=nft_register_chain pid=2974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.415000 audit[2974]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.415000 audit[2974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc404aa80 a2=0 a3=1 items=0 ppid=2930 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.415000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:08:13.421000 audit[2975]: NETFILTER_CFG table=nat:45 family=10 entries=1 op=nft_register_chain pid=2975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.421000 audit[2975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6d27830 a2=0 a3=1 items=0 ppid=2930 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.421000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:08:13.448000 audit[2976]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=2976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.448000 audit[2976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff7c4e8d0 a2=0 a3=1 items=0 ppid=2930 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.448000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:08:13.501000 audit[2977]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.501000 audit[2977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffffaadcb00 a2=0 a3=1 items=0 ppid=2930 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.501000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 14:08:13.505000 audit[2979]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.505000 audit[2979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe4225c70 a2=0 a3=1 items=0 ppid=2930 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.505000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 14:08:13.509000 audit[2982]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=2982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.509000 audit[2982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffbc74c20 a2=0 a3=1 items=0 ppid=2930 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 14:08:13.510000 audit[2983]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.510000 audit[2983]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe33813a0 a2=0 a3=1 items=0 ppid=2930 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.510000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 14:08:13.513000 audit[2985]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.513000 audit[2985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc10b8120 a2=0 a3=1 items=0 ppid=2930 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.513000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 14:08:13.514000 audit[2986]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.514000 audit[2986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc0e03c0 a2=0 a3=1 items=0 ppid=2930 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.514000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 14:08:13.517000 audit[2988]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.517000 audit[2988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd5a35730 a2=0 a3=1 items=0 ppid=2930 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.517000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 14:08:13.521000 audit[2991]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.521000 audit[2991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd1daf100 a2=0 a3=1 items=0 ppid=2930 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.521000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 14:08:13.522000 audit[2992]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=2992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.522000 audit[2992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5047b10 a2=0 a3=1 items=0 ppid=2930 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.522000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 14:08:13.525000 audit[2994]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.525000 audit[2994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe6f0f2e0 a2=0 a3=1 items=0 ppid=2930 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.525000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 14:08:13.526000 audit[2995]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.526000 audit[2995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcf427db0 a2=0 a3=1 items=0 ppid=2930 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.526000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 14:08:13.528000 audit[2997]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=2997 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 13 14:08:13.528000 audit[2997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe16ebee0 a2=0 a3=1 items=0 ppid=2930 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.528000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:08:13.533000 audit[3000]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_rule pid=3000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.533000 audit[3000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe4d702f0 a2=0 a3=1 items=0 ppid=2930 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.533000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:08:13.588000 audit[3003]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.588000 audit[3003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdbb9ba70 a2=0 a3=1 items=0 ppid=2930 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 14:08:13.589000 audit[3004]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.589000 audit[3004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd380ebf0 a2=0 a3=1 items=0 ppid=2930 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.589000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 14:08:13.592000 audit[3006]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.592000 audit[3006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd981d700 a2=0 a3=1 items=0 ppid=2930 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.592000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:08:13.595000 audit[3009]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_rule pid=3009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.595000 audit[3009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd6524620 a2=0 a3=1 items=0 ppid=2930 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.595000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:08:13.596000 audit[3010]: NETFILTER_CFG table=nat:64 family=2 entries=1 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.596000 audit[3010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff09f7020 a2=0 a3=1 items=0 ppid=2930 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.596000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 14:08:13.598000 audit[3012]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_rule pid=3012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:08:13.598000 audit[3012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe6312030 a2=0 a3=1 items=0 ppid=2930 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.598000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 14:08:13.608488 env[1588]: time="2024-12-13T14:08:13.608408753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:13.608736 env[1588]: time="2024-12-13T14:08:13.608711181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:13.608843 env[1588]: time="2024-12-13T14:08:13.608822936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:13.609163 env[1588]: time="2024-12-13T14:08:13.609131843Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b23e87fa6e847326bec86005d5a8cbffbdb561e7d5459de3062c4e5dd397a40 pid=3026 runtime=io.containerd.runc.v2 Dec 13 14:08:13.630000 audit[3031]: NETFILTER_CFG table=filter:66 family=2 entries=8 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:08:13.630000 audit[3031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffdec8e560 a2=0 a3=1 items=0 ppid=2930 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.630000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:13.642000 audit[3031]: NETFILTER_CFG table=nat:67 family=2 entries=14 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:08:13.642000 audit[3031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffdec8e560 a2=0 a3=1 items=0 ppid=2930 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.642000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:13.645000 audit[3060]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.645000 audit[3060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe520fef0 a2=0 a3=1 items=0 ppid=2930 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.645000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 14:08:13.651000 audit[3062]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=3062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.651000 audit[3062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffdd527320 a2=0 a3=1 items=0 ppid=2930 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.651000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 14:08:13.658808 env[1588]: time="2024-12-13T14:08:13.658762954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-ckv88,Uid:53c9e5f5-0533-441e-96ec-6851ffdde7fa,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3b23e87fa6e847326bec86005d5a8cbffbdb561e7d5459de3062c4e5dd397a40\"" Dec 13 
14:08:13.658000 audit[3071]: NETFILTER_CFG table=filter:70 family=10 entries=2 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.658000 audit[3071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd59074e0 a2=0 a3=1 items=0 ppid=2930 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.658000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 14:08:13.661000 audit[3072]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.661000 audit[3072]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe405cf0 a2=0 a3=1 items=0 ppid=2930 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.661000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 14:08:13.665061 env[1588]: time="2024-12-13T14:08:13.664689714Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 14:08:13.664000 audit[3074]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.664000 audit[3074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffc071220 a2=0 a3=1 items=0 ppid=2930 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.664000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 14:08:13.665000 audit[3075]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.665000 audit[3075]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe606af10 a2=0 a3=1 items=0 ppid=2930 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.665000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 14:08:13.667000 audit[3077]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.667000 audit[3077]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc5ef7890 a2=0 a3=1 items=0 ppid=2930 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 14:08:13.671000 audit[3080]: NETFILTER_CFG table=filter:75 family=10 entries=2 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.671000 audit[3080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd9e022e0 a2=0 a3=1 items=0 ppid=2930 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.671000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 14:08:13.675000 audit[3081]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.675000 audit[3081]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc95becc0 a2=0 a3=1 items=0 ppid=2930 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.675000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 14:08:13.677000 audit[3083]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.677000 audit[3083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffeb65c470 a2=0 a3=1 items=0 ppid=2930 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.677000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 14:08:13.678000 audit[3084]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.678000 audit[3084]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff5659f60 a2=0 a3=1 items=0 ppid=2930 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.678000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 14:08:13.680000 audit[3086]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.680000 audit[3086]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=748 a0=3 a1=ffffebcedbb0 a2=0 a3=1 items=0 ppid=2930 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.680000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:08:13.683000 audit[3089]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.683000 audit[3089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffb46bea0 a2=0 a3=1 items=0 ppid=2930 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.683000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 14:08:13.686000 audit[3092]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.686000 audit[3092]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc9b6e000 a2=0 a3=1 items=0 ppid=2930 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.686000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 14:08:13.687000 audit[3093]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.687000 audit[3093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff1bb5de0 a2=0 a3=1 items=0 ppid=2930 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.687000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 14:08:13.689000 audit[3095]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.689000 audit[3095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffc77d20d0 a2=0 a3=1 items=0 ppid=2930 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.689000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:08:13.692000 audit[3098]: NETFILTER_CFG table=nat:84 family=10 entries=2 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.692000 audit[3098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd7623470 a2=0 a3=1 items=0 ppid=2930 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.692000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:08:13.693000 audit[3099]: NETFILTER_CFG table=nat:85 family=10 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.693000 audit[3099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf740260 a2=0 a3=1 items=0 ppid=2930 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.693000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 14:08:13.695000 audit[3101]: NETFILTER_CFG table=nat:86 family=10 entries=2 op=nft_register_chain pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.695000 audit[3101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffa86c460 a2=0 a3=1 items=0 ppid=2930 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.695000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 14:08:13.696000 audit[3102]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.696000 audit[3102]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe8f28470 a2=0 a3=1 items=0 ppid=2930 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.696000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 14:08:13.698000 audit[3104]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.698000 audit[3104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe779dcd0 a2=0 a3=1 items=0 ppid=2930 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.698000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:08:13.701000 audit[3107]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:08:13.701000 audit[3107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdbc77b30 a2=0 a3=1 items=0 ppid=2930 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.701000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:08:13.703000 audit[3109]: NETFILTER_CFG table=filter:90 family=10 entries=3 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 14:08:13.703000 audit[3109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=ffffd4503c80 a2=0 a3=1 items=0 ppid=2930 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.703000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:13.704000 audit[3109]: NETFILTER_CFG table=nat:91 family=10 entries=7 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 14:08:13.704000 audit[3109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd4503c80 a2=0 a3=1 items=0 ppid=2930 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:13.704000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:13.774894 kubelet[2790]: I1213 14:08:13.774556 2790 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6fdml" podStartSLOduration=1.774508107 podStartE2EDuration="1.774508107s" podCreationTimestamp="2024-12-13 14:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:13.774293036 +0000 UTC m=+15.243429420" watchObservedRunningTime="2024-12-13 14:08:13.774508107 +0000 UTC m=+15.243644491" Dec 13 14:08:13.971233 systemd[1]: run-containerd-runc-k8s.io-f2782bd177da76ac6ddb7c732de9692691d711bd65b675bac5fea7d9037c191f-runc.lvzJEY.mount: Deactivated successfully. Dec 13 14:08:15.824956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3698804383.mount: Deactivated successfully. 
Dec 13 14:08:16.862233 env[1588]: time="2024-12-13T14:08:16.862176367Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:16.869143 env[1588]: time="2024-12-13T14:08:16.869092100Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:16.872946 env[1588]: time="2024-12-13T14:08:16.872910313Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:16.877695 env[1588]: time="2024-12-13T14:08:16.877644930Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:16.878491 env[1588]: time="2024-12-13T14:08:16.878457499Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 14:08:16.881608 env[1588]: time="2024-12-13T14:08:16.881552380Z" level=info msg="CreateContainer within sandbox \"3b23e87fa6e847326bec86005d5a8cbffbdb561e7d5459de3062c4e5dd397a40\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 14:08:16.905132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3147877767.mount: Deactivated successfully. Dec 13 14:08:16.909727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3815489802.mount: Deactivated successfully. 
Dec 13 14:08:16.920001 env[1588]: time="2024-12-13T14:08:16.919946419Z" level=info msg="CreateContainer within sandbox \"3b23e87fa6e847326bec86005d5a8cbffbdb561e7d5459de3062c4e5dd397a40\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"82126c2b0d2d25af87c9c300cccc0109d5fbff38ab54303094844583f130e12c\"" Dec 13 14:08:16.921775 env[1588]: time="2024-12-13T14:08:16.920695590Z" level=info msg="StartContainer for \"82126c2b0d2d25af87c9c300cccc0109d5fbff38ab54303094844583f130e12c\"" Dec 13 14:08:16.977878 env[1588]: time="2024-12-13T14:08:16.977827466Z" level=info msg="StartContainer for \"82126c2b0d2d25af87c9c300cccc0109d5fbff38ab54303094844583f130e12c\" returns successfully" Dec 13 14:08:18.685095 kubelet[2790]: I1213 14:08:18.685052 2790 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-ckv88" podStartSLOduration=2.467196348 podStartE2EDuration="5.68501081s" podCreationTimestamp="2024-12-13 14:08:13 +0000 UTC" firstStartedPulling="2024-12-13 14:08:13.660996903 +0000 UTC m=+15.130133247" lastFinishedPulling="2024-12-13 14:08:16.878811365 +0000 UTC m=+18.347947709" observedRunningTime="2024-12-13 14:08:17.783707245 +0000 UTC m=+19.252843589" watchObservedRunningTime="2024-12-13 14:08:18.68501081 +0000 UTC m=+20.154147194" Dec 13 14:08:20.726638 kernel: kauditd_printk_skb: 143 callbacks suppressed Dec 13 14:08:20.726748 kernel: audit: type=1325 audit(1734098900.710:299): table=filter:92 family=2 entries=15 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:08:20.710000 audit[3149]: NETFILTER_CFG table=filter:92 family=2 entries=15 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:08:20.710000 audit[3149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffd6402c70 a2=0 a3=1 items=0 ppid=2930 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:20.756357 kernel: audit: type=1300 audit(1734098900.710:299): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffd6402c70 a2=0 a3=1 items=0 ppid=2930 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:20.710000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:20.769524 kernel: audit: type=1327 audit(1734098900.710:299): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:20.771000 audit[3149]: NETFILTER_CFG table=nat:93 family=2 entries=12 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:08:20.771000 audit[3149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd6402c70 a2=0 a3=1 items=0 ppid=2930 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:20.812537 kernel: audit: type=1325 audit(1734098900.771:300): table=nat:93 family=2 entries=12 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 13 14:08:20.812709 kernel: audit: type=1300 audit(1734098900.771:300): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd6402c70 a2=0 a3=1 items=0 ppid=2930 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:20.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:20.826429 kernel: audit: type=1327 audit(1734098900.771:300): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:20.840000 audit[3151]: NETFILTER_CFG table=filter:94 family=2 entries=16 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:08:20.840000 audit[3151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffff4597c80 a2=0 a3=1 items=0 ppid=2930 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:20.881693 kernel: audit: type=1325 audit(1734098900.840:301): table=filter:94 family=2 entries=16 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:08:20.881790 kernel: audit: type=1300 audit(1734098900.840:301): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffff4597c80 a2=0 a3=1 items=0 ppid=2930 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:20.840000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:20.895431 kernel: audit: type=1327 audit(1734098900.840:301): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:20.898000 audit[3151]: NETFILTER_CFG table=nat:95 family=2 entries=12 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:08:20.898000 audit[3151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff4597c80 a2=0 a3=1 items=0 ppid=2930 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:20.898000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:20.913619 kernel: audit: type=1325 audit(1734098900.898:302): table=nat:95 family=2 entries=12 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:08:21.146409 kubelet[2790]: I1213 14:08:21.146295 2790 topology_manager.go:215] "Topology Admit Handler" podUID="1f6fbbd9-91a7-4310-aff8-ed4b3231fc57" podNamespace="calico-system" podName="calico-typha-5c9bbbd468-lfbs8" Dec 13 14:08:21.238116 kubelet[2790]: I1213 14:08:21.238071 2790 topology_manager.go:215] "Topology Admit Handler" podUID="77348368-5bd3-4f95-b97c-27347e1ae607" podNamespace="calico-system" podName="calico-node-bnx9h" Dec 13 14:08:21.297592 
kubelet[2790]: I1213 14:08:21.297558 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1f6fbbd9-91a7-4310-aff8-ed4b3231fc57-typha-certs\") pod \"calico-typha-5c9bbbd468-lfbs8\" (UID: \"1f6fbbd9-91a7-4310-aff8-ed4b3231fc57\") " pod="calico-system/calico-typha-5c9bbbd468-lfbs8" Dec 13 14:08:21.297850 kubelet[2790]: I1213 14:08:21.297835 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f6fbbd9-91a7-4310-aff8-ed4b3231fc57-tigera-ca-bundle\") pod \"calico-typha-5c9bbbd468-lfbs8\" (UID: \"1f6fbbd9-91a7-4310-aff8-ed4b3231fc57\") " pod="calico-system/calico-typha-5c9bbbd468-lfbs8" Dec 13 14:08:21.297955 kubelet[2790]: I1213 14:08:21.297944 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzgvh\" (UniqueName: \"kubernetes.io/projected/1f6fbbd9-91a7-4310-aff8-ed4b3231fc57-kube-api-access-pzgvh\") pod \"calico-typha-5c9bbbd468-lfbs8\" (UID: \"1f6fbbd9-91a7-4310-aff8-ed4b3231fc57\") " pod="calico-system/calico-typha-5c9bbbd468-lfbs8" Dec 13 14:08:21.380449 kubelet[2790]: I1213 14:08:21.380410 2790 topology_manager.go:215] "Topology Admit Handler" podUID="e35f131e-6a5b-4f9b-80ee-8f99f7186350" podNamespace="calico-system" podName="csi-node-driver-9jgbk" Dec 13 14:08:21.380947 kubelet[2790]: E1213 14:08:21.380925 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jgbk" podUID="e35f131e-6a5b-4f9b-80ee-8f99f7186350" Dec 13 14:08:21.399258 kubelet[2790]: I1213 14:08:21.399142 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/77348368-5bd3-4f95-b97c-27347e1ae607-var-lib-calico\") pod \"calico-node-bnx9h\" (UID: \"77348368-5bd3-4f95-b97c-27347e1ae607\") " pod="calico-system/calico-node-bnx9h" Dec 13 14:08:21.399609 kubelet[2790]: I1213 14:08:21.399372 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77348368-5bd3-4f95-b97c-27347e1ae607-lib-modules\") pod \"calico-node-bnx9h\" (UID: \"77348368-5bd3-4f95-b97c-27347e1ae607\") " pod="calico-system/calico-node-bnx9h" Dec 13 14:08:21.399609 kubelet[2790]: I1213 14:08:21.399407 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/77348368-5bd3-4f95-b97c-27347e1ae607-var-run-calico\") pod \"calico-node-bnx9h\" (UID: \"77348368-5bd3-4f95-b97c-27347e1ae607\") " pod="calico-system/calico-node-bnx9h" Dec 13 14:08:21.399609 kubelet[2790]: I1213 14:08:21.399428 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/77348368-5bd3-4f95-b97c-27347e1ae607-cni-bin-dir\") pod \"calico-node-bnx9h\" (UID: \"77348368-5bd3-4f95-b97c-27347e1ae607\") " pod="calico-system/calico-node-bnx9h" Dec 13 14:08:21.399609 kubelet[2790]: I1213 14:08:21.399451 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/77348368-5bd3-4f95-b97c-27347e1ae607-policysync\") pod \"calico-node-bnx9h\" (UID: \"77348368-5bd3-4f95-b97c-27347e1ae607\") " pod="calico-system/calico-node-bnx9h" Dec 13 14:08:21.399609 kubelet[2790]: I1213 14:08:21.399470 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/77348368-5bd3-4f95-b97c-27347e1ae607-flexvol-driver-host\") pod \"calico-node-bnx9h\" (UID: \"77348368-5bd3-4f95-b97c-27347e1ae607\") " pod="calico-system/calico-node-bnx9h" Dec 13 14:08:21.399759 kubelet[2790]: I1213 14:08:21.399490 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77348368-5bd3-4f95-b97c-27347e1ae607-xtables-lock\") pod \"calico-node-bnx9h\" (UID: \"77348368-5bd3-4f95-b97c-27347e1ae607\") " pod="calico-system/calico-node-bnx9h" Dec 13 14:08:21.399759 kubelet[2790]: I1213 14:08:21.399512 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77348368-5bd3-4f95-b97c-27347e1ae607-tigera-ca-bundle\") pod \"calico-node-bnx9h\" (UID: \"77348368-5bd3-4f95-b97c-27347e1ae607\") " pod="calico-system/calico-node-bnx9h" Dec 13 14:08:21.399759 kubelet[2790]: I1213 14:08:21.399539 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/77348368-5bd3-4f95-b97c-27347e1ae607-node-certs\") pod \"calico-node-bnx9h\" (UID: \"77348368-5bd3-4f95-b97c-27347e1ae607\") " pod="calico-system/calico-node-bnx9h" Dec 13 14:08:21.399759 kubelet[2790]: I1213 14:08:21.399561 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w87lr\" (UniqueName: \"kubernetes.io/projected/77348368-5bd3-4f95-b97c-27347e1ae607-kube-api-access-w87lr\") pod \"calico-node-bnx9h\" (UID: \"77348368-5bd3-4f95-b97c-27347e1ae607\") " pod="calico-system/calico-node-bnx9h" Dec 13 14:08:21.399759 kubelet[2790]: I1213 14:08:21.399582 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/77348368-5bd3-4f95-b97c-27347e1ae607-cni-net-dir\") pod \"calico-node-bnx9h\" (UID: \"77348368-5bd3-4f95-b97c-27347e1ae607\") " pod="calico-system/calico-node-bnx9h" Dec 13 14:08:21.399866 kubelet[2790]: I1213 14:08:21.399626 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/77348368-5bd3-4f95-b97c-27347e1ae607-cni-log-dir\") pod \"calico-node-bnx9h\" (UID: \"77348368-5bd3-4f95-b97c-27347e1ae607\") " pod="calico-system/calico-node-bnx9h" Dec 13 14:08:21.450681 env[1588]: time="2024-12-13T14:08:21.450265867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c9bbbd468-lfbs8,Uid:1f6fbbd9-91a7-4310-aff8-ed4b3231fc57,Namespace:calico-system,Attempt:0,}" Dec 13 14:08:21.484839 env[1588]: time="2024-12-13T14:08:21.484755433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:21.485037 env[1588]: time="2024-12-13T14:08:21.485012344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:21.485177 env[1588]: time="2024-12-13T14:08:21.485141539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:21.485430 env[1588]: time="2024-12-13T14:08:21.485402530Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cef26961b1ad153db0061a43e45ab551e353b271c6f5052086e21201863b84ac pid=3160 runtime=io.containerd.runc.v2 Dec 13 14:08:21.500635 kubelet[2790]: I1213 14:08:21.500301 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e35f131e-6a5b-4f9b-80ee-8f99f7186350-kubelet-dir\") pod \"csi-node-driver-9jgbk\" (UID: \"e35f131e-6a5b-4f9b-80ee-8f99f7186350\") " pod="calico-system/csi-node-driver-9jgbk" Dec 13 14:08:21.500635 kubelet[2790]: I1213 14:08:21.500373 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e35f131e-6a5b-4f9b-80ee-8f99f7186350-varrun\") pod \"csi-node-driver-9jgbk\" (UID: \"e35f131e-6a5b-4f9b-80ee-8f99f7186350\") " pod="calico-system/csi-node-driver-9jgbk" Dec 13 14:08:21.500635 kubelet[2790]: I1213 14:08:21.500476 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx9g9\" (UniqueName: \"kubernetes.io/projected/e35f131e-6a5b-4f9b-80ee-8f99f7186350-kube-api-access-zx9g9\") pod \"csi-node-driver-9jgbk\" (UID: \"e35f131e-6a5b-4f9b-80ee-8f99f7186350\") " pod="calico-system/csi-node-driver-9jgbk" Dec 13 14:08:21.500635 kubelet[2790]: I1213 14:08:21.500519 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e35f131e-6a5b-4f9b-80ee-8f99f7186350-socket-dir\") pod \"csi-node-driver-9jgbk\" (UID: \"e35f131e-6a5b-4f9b-80ee-8f99f7186350\") " pod="calico-system/csi-node-driver-9jgbk" Dec 13 14:08:21.500635 kubelet[2790]: I1213 14:08:21.500541 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e35f131e-6a5b-4f9b-80ee-8f99f7186350-registration-dir\") pod \"csi-node-driver-9jgbk\" (UID: \"e35f131e-6a5b-4f9b-80ee-8f99f7186350\") " pod="calico-system/csi-node-driver-9jgbk" Dec 13 14:08:21.512864 kubelet[2790]: E1213 14:08:21.504642 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.512864 kubelet[2790]: W1213 14:08:21.504668 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.512864 kubelet[2790]: E1213 14:08:21.504690 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:21.514498 kubelet[2790]: E1213 14:08:21.513873 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.514498 kubelet[2790]: W1213 14:08:21.513892 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.514498 kubelet[2790]: E1213 14:08:21.513918 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.519487 kubelet[2790]: E1213 14:08:21.519462 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.519487 kubelet[2790]: W1213 14:08:21.519480 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.519631 kubelet[2790]: E1213 14:08:21.519503 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.525901 kubelet[2790]: E1213 14:08:21.522118 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.525901 kubelet[2790]: W1213 14:08:21.522146 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.525901 kubelet[2790]: E1213 14:08:21.522240 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.525901 kubelet[2790]: E1213 14:08:21.522838 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.525901 kubelet[2790]: W1213 14:08:21.522848 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.525901 kubelet[2790]: E1213 14:08:21.522932 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.525901 kubelet[2790]: E1213 14:08:21.523718 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.525901 kubelet[2790]: W1213 14:08:21.523727 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.525901 kubelet[2790]: E1213 14:08:21.523853 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:21.525901 kubelet[2790]: E1213 14:08:21.524223 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.526258 kubelet[2790]: W1213 14:08:21.524233 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.526258 kubelet[2790]: E1213 14:08:21.524333 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.526258 kubelet[2790]: E1213 14:08:21.524461 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.526258 kubelet[2790]: W1213 14:08:21.524468 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.526258 kubelet[2790]: E1213 14:08:21.524529 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.526258 kubelet[2790]: E1213 14:08:21.524636 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.526258 kubelet[2790]: W1213 14:08:21.524643 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.526258 kubelet[2790]: E1213 14:08:21.524708 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.526258 kubelet[2790]: E1213 14:08:21.524792 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.526258 kubelet[2790]: W1213 14:08:21.524799 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.526542 kubelet[2790]: E1213 14:08:21.524855 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.526542 kubelet[2790]: E1213 14:08:21.524926 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.526542 kubelet[2790]: W1213 14:08:21.524934 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.526542 kubelet[2790]: E1213 14:08:21.524986 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:21.526542 kubelet[2790]: E1213 14:08:21.525062 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.526542 kubelet[2790]: W1213 14:08:21.525068 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.526542 kubelet[2790]: E1213 14:08:21.525118 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.526542 kubelet[2790]: E1213 14:08:21.525189 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.526542 kubelet[2790]: W1213 14:08:21.525197 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.526542 kubelet[2790]: E1213 14:08:21.525208 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.526888 kubelet[2790]: E1213 14:08:21.525334 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.526888 kubelet[2790]: W1213 14:08:21.525341 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.526888 kubelet[2790]: E1213 14:08:21.525355 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.526888 kubelet[2790]: E1213 14:08:21.525532 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.526888 kubelet[2790]: W1213 14:08:21.525539 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.526888 kubelet[2790]: E1213 14:08:21.525549 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.526888 kubelet[2790]: E1213 14:08:21.525712 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.526888 kubelet[2790]: W1213 14:08:21.525719 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.526888 kubelet[2790]: E1213 14:08:21.525731 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:21.526888 kubelet[2790]: E1213 14:08:21.525853 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.527097 kubelet[2790]: W1213 14:08:21.525860 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.527097 kubelet[2790]: E1213 14:08:21.525869 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.529711 kubelet[2790]: E1213 14:08:21.527817 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.529711 kubelet[2790]: W1213 14:08:21.527836 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.529711 kubelet[2790]: E1213 14:08:21.527855 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.529711 kubelet[2790]: E1213 14:08:21.529213 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.529711 kubelet[2790]: W1213 14:08:21.529224 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.529711 kubelet[2790]: E1213 14:08:21.529257 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.529711 kubelet[2790]: E1213 14:08:21.529465 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.529711 kubelet[2790]: W1213 14:08:21.529474 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.529711 kubelet[2790]: E1213 14:08:21.529486 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:21.542395 env[1588]: time="2024-12-13T14:08:21.542356213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bnx9h,Uid:77348368-5bd3-4f95-b97c-27347e1ae607,Namespace:calico-system,Attempt:0,}" Dec 13 14:08:21.560275 env[1588]: time="2024-12-13T14:08:21.560204094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c9bbbd468-lfbs8,Uid:1f6fbbd9-91a7-4310-aff8-ed4b3231fc57,Namespace:calico-system,Attempt:0,} returns sandbox id \"cef26961b1ad153db0061a43e45ab551e353b271c6f5052086e21201863b84ac\"" Dec 13 14:08:21.562656 env[1588]: time="2024-12-13T14:08:21.562035189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 14:08:21.581545 env[1588]: time="2024-12-13T14:08:21.581456694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:21.581545 env[1588]: time="2024-12-13T14:08:21.581508452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:21.581752 env[1588]: time="2024-12-13T14:08:21.581534692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:21.581841 env[1588]: time="2024-12-13T14:08:21.581781283Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/74ca0216c7e087ec913c2b73cbb9adf11f1c0ff466024f47e91cd4992ad3a2a7 pid=3223 runtime=io.containerd.runc.v2 Dec 13 14:08:21.604560 kubelet[2790]: E1213 14:08:21.604524 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.604560 kubelet[2790]: W1213 14:08:21.604550 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.604823 kubelet[2790]: E1213 14:08:21.604574 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.605288 kubelet[2790]: E1213 14:08:21.605045 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.605288 kubelet[2790]: W1213 14:08:21.605068 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.605288 kubelet[2790]: E1213 14:08:21.605083 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:21.605577 kubelet[2790]: E1213 14:08:21.605441 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.605577 kubelet[2790]: W1213 14:08:21.605457 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.605577 kubelet[2790]: E1213 14:08:21.605479 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.605910 kubelet[2790]: E1213 14:08:21.605796 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.605910 kubelet[2790]: W1213 14:08:21.605807 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.605910 kubelet[2790]: E1213 14:08:21.605834 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.606212 kubelet[2790]: E1213 14:08:21.606068 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.606212 kubelet[2790]: W1213 14:08:21.606080 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.606212 kubelet[2790]: E1213 14:08:21.606095 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.606470 kubelet[2790]: E1213 14:08:21.606380 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.606470 kubelet[2790]: W1213 14:08:21.606391 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.606470 kubelet[2790]: E1213 14:08:21.606424 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.608683 kubelet[2790]: E1213 14:08:21.608396 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.608683 kubelet[2790]: W1213 14:08:21.608414 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.608683 kubelet[2790]: E1213 14:08:21.608519 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:21.608992 kubelet[2790]: E1213 14:08:21.608870 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.608992 kubelet[2790]: W1213 14:08:21.608884 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.608992 kubelet[2790]: E1213 14:08:21.608903 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.611946 kubelet[2790]: E1213 14:08:21.611799 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.611946 kubelet[2790]: W1213 14:08:21.611821 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.611946 kubelet[2790]: E1213 14:08:21.611847 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.615003 kubelet[2790]: E1213 14:08:21.614903 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.615003 kubelet[2790]: W1213 14:08:21.614921 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.618094 kubelet[2790]: E1213 14:08:21.617961 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.618094 kubelet[2790]: W1213 14:08:21.617981 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.619120 kubelet[2790]: E1213 14:08:21.618422 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.619120 kubelet[2790]: W1213 14:08:21.618434 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.619120 kubelet[2790]: E1213 14:08:21.618482 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.619120 kubelet[2790]: E1213 14:08:21.618998 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.619120 kubelet[2790]: E1213 14:08:21.619072 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:21.619622 kubelet[2790]: E1213 14:08:21.619481 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.619622 kubelet[2790]: W1213 14:08:21.619494 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.619622 kubelet[2790]: E1213 14:08:21.619512 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.619970 kubelet[2790]: E1213 14:08:21.619807 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.619970 kubelet[2790]: W1213 14:08:21.619818 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.619970 kubelet[2790]: E1213 14:08:21.619833 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.620479 kubelet[2790]: E1213 14:08:21.620155 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.620479 kubelet[2790]: W1213 14:08:21.620167 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.620479 kubelet[2790]: E1213 14:08:21.620253 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.621108 kubelet[2790]: E1213 14:08:21.621094 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.621209 kubelet[2790]: W1213 14:08:21.621195 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.621369 kubelet[2790]: E1213 14:08:21.621357 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.621558 kubelet[2790]: E1213 14:08:21.621549 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.621679 kubelet[2790]: W1213 14:08:21.621665 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.621835 kubelet[2790]: E1213 14:08:21.621823 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:21.622130 kubelet[2790]: E1213 14:08:21.622117 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.622241 kubelet[2790]: W1213 14:08:21.622229 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.622405 kubelet[2790]: E1213 14:08:21.622393 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.622569 kubelet[2790]: E1213 14:08:21.622559 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.622702 kubelet[2790]: W1213 14:08:21.622690 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.622768 kubelet[2790]: E1213 14:08:21.622759 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.623332 kubelet[2790]: E1213 14:08:21.623317 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.623415 kubelet[2790]: W1213 14:08:21.623403 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.623474 kubelet[2790]: E1213 14:08:21.623465 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.624694 kubelet[2790]: E1213 14:08:21.624671 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.624797 kubelet[2790]: W1213 14:08:21.624783 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.624867 kubelet[2790]: E1213 14:08:21.624854 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.625118 kubelet[2790]: E1213 14:08:21.625105 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.625206 kubelet[2790]: W1213 14:08:21.625193 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.625346 kubelet[2790]: E1213 14:08:21.625335 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:21.625579 kubelet[2790]: E1213 14:08:21.625569 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.625698 kubelet[2790]: W1213 14:08:21.625684 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.625762 kubelet[2790]: E1213 14:08:21.625753 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.626387 kubelet[2790]: E1213 14:08:21.626372 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.626509 kubelet[2790]: W1213 14:08:21.626496 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.626584 kubelet[2790]: E1213 14:08:21.626575 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.626909 kubelet[2790]: E1213 14:08:21.626896 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.626997 kubelet[2790]: W1213 14:08:21.626985 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.627055 kubelet[2790]: E1213 14:08:21.627046 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:21.627705 kubelet[2790]: E1213 14:08:21.627691 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:21.628261 kubelet[2790]: W1213 14:08:21.628246 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:21.628368 kubelet[2790]: E1213 14:08:21.628357 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:21.638247 env[1588]: time="2024-12-13T14:08:21.638195425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bnx9h,Uid:77348368-5bd3-4f95-b97c-27347e1ae607,Namespace:calico-system,Attempt:0,} returns sandbox id \"74ca0216c7e087ec913c2b73cbb9adf11f1c0ff466024f47e91cd4992ad3a2a7\"" Dec 13 14:08:21.920000 audit[3284]: NETFILTER_CFG table=filter:96 family=2 entries=17 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:08:21.920000 audit[3284]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6652 a0=3 a1=ffffe44f8410 a2=0 a3=1 items=0 ppid=2930 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:21.920000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:21.925000 audit[3284]: NETFILTER_CFG table=nat:97 family=2 entries=12 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:08:21.925000 audit[3284]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe44f8410 a2=0 a3=1 items=0 ppid=2930 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:21.925000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:22.669973 kubelet[2790]: E1213 14:08:22.669935 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jgbk" podUID="e35f131e-6a5b-4f9b-80ee-8f99f7186350" Dec 13 14:08:23.107803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2009741704.mount: Deactivated successfully. 
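The two NETFILTER_CFG audit events above carry the caller's command line in the PROCTITLE field as hex-encoded, NUL-separated argv. As a minimal sketch (the helper name `decodeProctitle` is ours; the sample constant is copied verbatim from the audit records above), the field can be turned back into a readable command:

```go
// decode_proctitle.go - decode the hex-encoded PROCTITLE field of an audit record.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle converts the hex-encoded, NUL-separated argv from an audit
// PROCTITLE record back into a space-joined command line.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// argv elements are separated by NUL bytes in the kernel record.
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	return strings.Join(args, " "), nil
}

func main() {
	// proctitle value taken from the NETFILTER_CFG audit events above.
	const sample = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
	cmd, err := decodeProctitle(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd)
}
```

Decoded, the proctitle reads `iptables-restore -w 5 -W 100000 --noflush --counters`, matching the exe field `/usr/sbin/xtables-nft-multi` in the same records: rules are being restored through the nft-backed iptables-restore shortly after the Calico pods are scheduled.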
Dec 13 14:08:23.644998 env[1588]: time="2024-12-13T14:08:23.644946962Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:23.649620 env[1588]: time="2024-12-13T14:08:23.649548802Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:23.654457 env[1588]: time="2024-12-13T14:08:23.654419953Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:23.657697 env[1588]: time="2024-12-13T14:08:23.657660400Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:23.658211 env[1588]: time="2024-12-13T14:08:23.658182262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 14:08:23.661909 env[1588]: time="2024-12-13T14:08:23.661865494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 14:08:23.675341 env[1588]: time="2024-12-13T14:08:23.675295067Z" level=info msg="CreateContainer within sandbox \"cef26961b1ad153db0061a43e45ab551e353b271c6f5052086e21201863b84ac\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 14:08:23.699623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount895943944.mount: Deactivated successfully. 
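The driver-call.go and plugins.go errors repeated throughout this log come from kubelet probing the FlexVolume directory nodeagent~uds before the uds binary has been installed there (Calico's calico-node pod, with its flexvol-driver-host host-path mount above, is what normally puts it in place). The `init` call finds no executable, produces empty output, and unmarshalling that empty output is exactly what yields "unexpected end of JSON input". A rough Go sketch of that sequence follows; `driverStatus` and `probe` are hypothetical stand-ins for kubelet's internal types, not the real implementation:

```go
// flexvol_probe.go - sketch of why a missing FlexVolume driver produces both a
// driver-call failure and "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a stand-in for kubelet's internal FlexVolume status struct.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func probe(driverPath string) error {
	// kubelet invokes "<driver> init" and expects a JSON status on stdout.
	out, execErr := exec.Command(driverPath, "init").CombinedOutput()
	if execErr != nil {
		// The binary is absent, so the call itself fails (kubelet reports this
		// as "executable file not found in $PATH") and out stays empty.
		fmt.Printf("driver call failed: %v, output: %q\n", execErr, string(out))
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// encoding/json returns "unexpected end of JSON input" for empty input,
		// which is the error repeated throughout the log above and below.
		return fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v", string(out), err)
	}
	fmt.Printf("driver initialised: %+v\n", st)
	return nil
}

func main() {
	// Path taken from the log; on a node where the Calico flex-volume binary is
	// not yet installed, this reproduces both messages.
	if err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println(err)
	}
}
```

These probe errors are typically transient: once calico-node has copied the uds binary into /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, the periodic plugin probe starts returning a valid JSON status and the messages stop.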
Dec 13 14:08:23.717419 env[1588]: time="2024-12-13T14:08:23.717364564Z" level=info msg="CreateContainer within sandbox \"cef26961b1ad153db0061a43e45ab551e353b271c6f5052086e21201863b84ac\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"94bd3deee03ea21c00713d24dd5d9ba6bcca44331fb28c35d8881fa4c2dd62f3\"" Dec 13 14:08:23.719280 env[1588]: time="2024-12-13T14:08:23.718170776Z" level=info msg="StartContainer for \"94bd3deee03ea21c00713d24dd5d9ba6bcca44331fb28c35d8881fa4c2dd62f3\"" Dec 13 14:08:23.777139 env[1588]: time="2024-12-13T14:08:23.777092088Z" level=info msg="StartContainer for \"94bd3deee03ea21c00713d24dd5d9ba6bcca44331fb28c35d8881fa4c2dd62f3\" returns successfully" Dec 13 14:08:23.812369 kubelet[2790]: I1213 14:08:23.811984 2790 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5c9bbbd468-lfbs8" podStartSLOduration=0.715109267 podStartE2EDuration="2.811942996s" podCreationTimestamp="2024-12-13 14:08:21 +0000 UTC" firstStartedPulling="2024-12-13 14:08:21.561781118 +0000 UTC m=+23.030917502" lastFinishedPulling="2024-12-13 14:08:23.658614767 +0000 UTC m=+25.127751231" observedRunningTime="2024-12-13 14:08:23.811681525 +0000 UTC m=+25.280817909" watchObservedRunningTime="2024-12-13 14:08:23.811942996 +0000 UTC m=+25.281079380" Dec 13 14:08:23.834952 kubelet[2790]: E1213 14:08:23.834910 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.834952 kubelet[2790]: W1213 14:08:23.834936 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.835121 kubelet[2790]: E1213 14:08:23.834967 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.835312 kubelet[2790]: E1213 14:08:23.835287 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.835312 kubelet[2790]: W1213 14:08:23.835303 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.835395 kubelet[2790]: E1213 14:08:23.835317 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.835605 kubelet[2790]: E1213 14:08:23.835578 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.835653 kubelet[2790]: W1213 14:08:23.835592 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.835653 kubelet[2790]: E1213 14:08:23.835621 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:23.836130 kubelet[2790]: E1213 14:08:23.836109 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.836202 kubelet[2790]: W1213 14:08:23.836125 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.836202 kubelet[2790]: E1213 14:08:23.836158 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.836398 kubelet[2790]: E1213 14:08:23.836373 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.836398 kubelet[2790]: W1213 14:08:23.836392 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.836398 kubelet[2790]: E1213 14:08:23.836405 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.836646 kubelet[2790]: E1213 14:08:23.836628 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.836646 kubelet[2790]: W1213 14:08:23.836642 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.836727 kubelet[2790]: E1213 14:08:23.836654 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.836892 kubelet[2790]: E1213 14:08:23.836876 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.836892 kubelet[2790]: W1213 14:08:23.836889 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.836990 kubelet[2790]: E1213 14:08:23.836899 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.837138 kubelet[2790]: E1213 14:08:23.837116 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.837138 kubelet[2790]: W1213 14:08:23.837135 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.837230 kubelet[2790]: E1213 14:08:23.837150 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:23.837403 kubelet[2790]: E1213 14:08:23.837387 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.837403 kubelet[2790]: W1213 14:08:23.837400 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.837488 kubelet[2790]: E1213 14:08:23.837411 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.837649 kubelet[2790]: E1213 14:08:23.837632 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.837649 kubelet[2790]: W1213 14:08:23.837645 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.837729 kubelet[2790]: E1213 14:08:23.837657 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.837909 kubelet[2790]: E1213 14:08:23.837892 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.837909 kubelet[2790]: W1213 14:08:23.837906 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.838001 kubelet[2790]: E1213 14:08:23.837918 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.838156 kubelet[2790]: E1213 14:08:23.838139 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.838156 kubelet[2790]: W1213 14:08:23.838152 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.838243 kubelet[2790]: E1213 14:08:23.838164 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.838423 kubelet[2790]: E1213 14:08:23.838402 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.838423 kubelet[2790]: W1213 14:08:23.838418 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.838423 kubelet[2790]: E1213 14:08:23.838432 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:23.838672 kubelet[2790]: E1213 14:08:23.838652 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.838672 kubelet[2790]: W1213 14:08:23.838666 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.838767 kubelet[2790]: E1213 14:08:23.838679 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.838916 kubelet[2790]: E1213 14:08:23.838895 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.838916 kubelet[2790]: W1213 14:08:23.838910 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.839004 kubelet[2790]: E1213 14:08:23.838923 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.839177 kubelet[2790]: E1213 14:08:23.839158 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.839177 kubelet[2790]: W1213 14:08:23.839170 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.839261 kubelet[2790]: E1213 14:08:23.839181 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.839467 kubelet[2790]: E1213 14:08:23.839449 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.839467 kubelet[2790]: W1213 14:08:23.839462 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.839569 kubelet[2790]: E1213 14:08:23.839479 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.839791 kubelet[2790]: E1213 14:08:23.839765 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.839791 kubelet[2790]: W1213 14:08:23.839786 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.839883 kubelet[2790]: E1213 14:08:23.839805 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:23.840063 kubelet[2790]: E1213 14:08:23.840041 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.840063 kubelet[2790]: W1213 14:08:23.840055 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.840063 kubelet[2790]: E1213 14:08:23.840073 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.840311 kubelet[2790]: E1213 14:08:23.840294 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.840311 kubelet[2790]: W1213 14:08:23.840308 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.840406 kubelet[2790]: E1213 14:08:23.840327 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.840580 kubelet[2790]: E1213 14:08:23.840558 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.840580 kubelet[2790]: W1213 14:08:23.840572 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.840712 kubelet[2790]: E1213 14:08:23.840652 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.840894 kubelet[2790]: E1213 14:08:23.840876 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.840956 kubelet[2790]: W1213 14:08:23.840900 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.841070 kubelet[2790]: E1213 14:08:23.841018 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.841130 kubelet[2790]: E1213 14:08:23.841110 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.841130 kubelet[2790]: W1213 14:08:23.841126 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.841325 kubelet[2790]: E1213 14:08:23.841216 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:23.841380 kubelet[2790]: E1213 14:08:23.841347 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.841380 kubelet[2790]: W1213 14:08:23.841355 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.841380 kubelet[2790]: E1213 14:08:23.841369 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.841702 kubelet[2790]: E1213 14:08:23.841681 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.841702 kubelet[2790]: W1213 14:08:23.841694 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.841814 kubelet[2790]: E1213 14:08:23.841712 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.842154 kubelet[2790]: E1213 14:08:23.842021 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.842154 kubelet[2790]: W1213 14:08:23.842034 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.842154 kubelet[2790]: E1213 14:08:23.842062 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.842580 kubelet[2790]: E1213 14:08:23.842326 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.842580 kubelet[2790]: W1213 14:08:23.842339 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.842580 kubelet[2790]: E1213 14:08:23.842360 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.842776 kubelet[2790]: E1213 14:08:23.842757 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.842776 kubelet[2790]: W1213 14:08:23.842773 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.842854 kubelet[2790]: E1213 14:08:23.842792 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:23.843023 kubelet[2790]: E1213 14:08:23.842991 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.843023 kubelet[2790]: W1213 14:08:23.843009 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.843245 kubelet[2790]: E1213 14:08:23.843129 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.843357 kubelet[2790]: E1213 14:08:23.843341 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.843357 kubelet[2790]: W1213 14:08:23.843351 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.843431 kubelet[2790]: E1213 14:08:23.843414 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.843585 kubelet[2790]: E1213 14:08:23.843568 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.843688 kubelet[2790]: W1213 14:08:23.843659 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.843688 kubelet[2790]: E1213 14:08:23.843676 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.844191 kubelet[2790]: E1213 14:08:23.843870 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.844191 kubelet[2790]: W1213 14:08:23.843884 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.844191 kubelet[2790]: E1213 14:08:23.843897 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:23.844498 kubelet[2790]: E1213 14:08:23.844472 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:23.844498 kubelet[2790]: W1213 14:08:23.844489 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:23.844498 kubelet[2790]: E1213 14:08:23.844502 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:24.669230 kubelet[2790]: E1213 14:08:24.668907 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jgbk" podUID="e35f131e-6a5b-4f9b-80ee-8f99f7186350" Dec 13 14:08:24.798532 kubelet[2790]: I1213 14:08:24.797910 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:08:24.843427 kubelet[2790]: E1213 14:08:24.843303 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.843427 kubelet[2790]: W1213 14:08:24.843326 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.843427 kubelet[2790]: E1213 14:08:24.843349 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.844037 kubelet[2790]: E1213 14:08:24.843912 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.844037 kubelet[2790]: W1213 14:08:24.843926 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.844037 kubelet[2790]: E1213 14:08:24.843941 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.844452 kubelet[2790]: E1213 14:08:24.844279 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.844452 kubelet[2790]: W1213 14:08:24.844291 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.844452 kubelet[2790]: E1213 14:08:24.844304 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.844727 kubelet[2790]: E1213 14:08:24.844590 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.844727 kubelet[2790]: W1213 14:08:24.844618 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.844727 kubelet[2790]: E1213 14:08:24.844632 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:24.845037 kubelet[2790]: E1213 14:08:24.844899 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.845037 kubelet[2790]: W1213 14:08:24.844910 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.845037 kubelet[2790]: E1213 14:08:24.844924 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.845249 kubelet[2790]: E1213 14:08:24.845174 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.845249 kubelet[2790]: W1213 14:08:24.845185 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.845407 kubelet[2790]: E1213 14:08:24.845321 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.845612 kubelet[2790]: E1213 14:08:24.845506 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.845612 kubelet[2790]: W1213 14:08:24.845517 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.845612 kubelet[2790]: E1213 14:08:24.845528 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.845884 kubelet[2790]: E1213 14:08:24.845763 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.845884 kubelet[2790]: W1213 14:08:24.845773 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.845884 kubelet[2790]: E1213 14:08:24.845785 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.846163 kubelet[2790]: E1213 14:08:24.846069 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.846163 kubelet[2790]: W1213 14:08:24.846080 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.846163 kubelet[2790]: E1213 14:08:24.846094 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:24.846393 kubelet[2790]: E1213 14:08:24.846306 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.846393 kubelet[2790]: W1213 14:08:24.846315 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.846393 kubelet[2790]: E1213 14:08:24.846326 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.846529 kubelet[2790]: E1213 14:08:24.846519 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.846588 kubelet[2790]: W1213 14:08:24.846578 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.846733 kubelet[2790]: E1213 14:08:24.846660 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.846850 kubelet[2790]: E1213 14:08:24.846840 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.846906 kubelet[2790]: W1213 14:08:24.846896 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.846964 kubelet[2790]: E1213 14:08:24.846956 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.847209 kubelet[2790]: E1213 14:08:24.847198 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.847287 kubelet[2790]: W1213 14:08:24.847276 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.847344 kubelet[2790]: E1213 14:08:24.847335 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.847548 kubelet[2790]: E1213 14:08:24.847537 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.847636 kubelet[2790]: W1213 14:08:24.847625 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.847695 kubelet[2790]: E1213 14:08:24.847686 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:24.847912 kubelet[2790]: E1213 14:08:24.847901 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.847996 kubelet[2790]: W1213 14:08:24.847984 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.848050 kubelet[2790]: E1213 14:08:24.848041 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.848361 kubelet[2790]: E1213 14:08:24.848350 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.848443 kubelet[2790]: W1213 14:08:24.848431 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.848502 kubelet[2790]: E1213 14:08:24.848493 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.848883 kubelet[2790]: E1213 14:08:24.848861 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.848977 kubelet[2790]: W1213 14:08:24.848966 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.849042 kubelet[2790]: E1213 14:08:24.849033 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.849269 kubelet[2790]: E1213 14:08:24.849251 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.849269 kubelet[2790]: W1213 14:08:24.849268 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.849346 kubelet[2790]: E1213 14:08:24.849286 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.849564 kubelet[2790]: E1213 14:08:24.849539 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.849564 kubelet[2790]: W1213 14:08:24.849553 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.849778 kubelet[2790]: E1213 14:08:24.849588 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:24.849805 kubelet[2790]: E1213 14:08:24.849783 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.849805 kubelet[2790]: W1213 14:08:24.849794 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.849859 kubelet[2790]: E1213 14:08:24.849807 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.850076 kubelet[2790]: E1213 14:08:24.850057 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.850076 kubelet[2790]: W1213 14:08:24.850075 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.850161 kubelet[2790]: E1213 14:08:24.850094 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.850436 kubelet[2790]: E1213 14:08:24.850419 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.850436 kubelet[2790]: W1213 14:08:24.850434 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.850571 kubelet[2790]: E1213 14:08:24.850558 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.850721 kubelet[2790]: E1213 14:08:24.850576 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.850837 kubelet[2790]: W1213 14:08:24.850821 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.850943 kubelet[2790]: E1213 14:08:24.850923 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:24.854144 kubelet[2790]: E1213 14:08:24.852475 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.854404 kubelet[2790]: W1213 14:08:24.854373 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.855119 env[1588]: time="2024-12-13T14:08:24.854669115Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:24.855444 kubelet[2790]: E1213 14:08:24.855424 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.855789 kubelet[2790]: E1213 14:08:24.855770 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.855789 kubelet[2790]: W1213 14:08:24.855788 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.855888 kubelet[2790]: E1213 14:08:24.855810 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.855966 kubelet[2790]: E1213 14:08:24.855948 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.855966 kubelet[2790]: W1213 14:08:24.855961 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.856038 kubelet[2790]: E1213 14:08:24.855985 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.856198 kubelet[2790]: E1213 14:08:24.856183 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.856198 kubelet[2790]: W1213 14:08:24.856196 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.856198 kubelet[2790]: E1213 14:08:24.856211 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:24.856419 kubelet[2790]: E1213 14:08:24.856403 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.856419 kubelet[2790]: W1213 14:08:24.856418 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.856479 kubelet[2790]: E1213 14:08:24.856434 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.856781 kubelet[2790]: E1213 14:08:24.856766 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.856856 kubelet[2790]: W1213 14:08:24.856843 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.856936 kubelet[2790]: E1213 14:08:24.856926 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.857168 kubelet[2790]: E1213 14:08:24.857156 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.857244 kubelet[2790]: W1213 14:08:24.857232 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.857314 kubelet[2790]: E1213 14:08:24.857305 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.857559 kubelet[2790]: E1213 14:08:24.857542 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.857559 kubelet[2790]: W1213 14:08:24.857557 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.857679 kubelet[2790]: E1213 14:08:24.857575 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.857781 kubelet[2790]: E1213 14:08:24.857766 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.857781 kubelet[2790]: W1213 14:08:24.857779 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.857840 kubelet[2790]: E1213 14:08:24.857791 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:08:24.858520 kubelet[2790]: E1213 14:08:24.858499 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:08:24.858562 kubelet[2790]: W1213 14:08:24.858519 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:08:24.858562 kubelet[2790]: E1213 14:08:24.858536 2790 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:08:24.861266 env[1588]: time="2024-12-13T14:08:24.861228850Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:24.865262 env[1588]: time="2024-12-13T14:08:24.865224153Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:24.869199 env[1588]: time="2024-12-13T14:08:24.869149698Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:24.869776 env[1588]: time="2024-12-13T14:08:24.869745798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 14:08:24.873534 env[1588]: time="2024-12-13T14:08:24.872574181Z" level=info msg="CreateContainer within sandbox \"74ca0216c7e087ec913c2b73cbb9adf11f1c0ff466024f47e91cd4992ad3a2a7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 14:08:24.902397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1604637084.mount: Deactivated successfully. Dec 13 14:08:24.917775 env[1588]: time="2024-12-13T14:08:24.917727753Z" level=info msg="CreateContainer within sandbox \"74ca0216c7e087ec913c2b73cbb9adf11f1c0ff466024f47e91cd4992ad3a2a7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9391ebf1885e45ef42f50b7c070ae476a06359e3de8847d28f5133d36479b4f9\"" Dec 13 14:08:24.922589 env[1588]: time="2024-12-13T14:08:24.920230347Z" level=info msg="StartContainer for \"9391ebf1885e45ef42f50b7c070ae476a06359e3de8847d28f5133d36479b4f9\"" Dec 13 14:08:24.994440 env[1588]: time="2024-12-13T14:08:24.994389804Z" level=info msg="StartContainer for \"9391ebf1885e45ef42f50b7c070ae476a06359e3de8847d28f5133d36479b4f9\" returns successfully" Dec 13 14:08:25.665629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9391ebf1885e45ef42f50b7c070ae476a06359e3de8847d28f5133d36479b4f9-rootfs.mount: Deactivated successfully. 
Dec 13 14:08:26.004439 env[1588]: time="2024-12-13T14:08:26.004367123Z" level=info msg="shim disconnected" id=9391ebf1885e45ef42f50b7c070ae476a06359e3de8847d28f5133d36479b4f9 Dec 13 14:08:26.004439 env[1588]: time="2024-12-13T14:08:26.004425721Z" level=warning msg="cleaning up after shim disconnected" id=9391ebf1885e45ef42f50b7c070ae476a06359e3de8847d28f5133d36479b4f9 namespace=k8s.io Dec 13 14:08:26.004439 env[1588]: time="2024-12-13T14:08:26.004441560Z" level=info msg="cleaning up dead shim" Dec 13 14:08:26.013711 env[1588]: time="2024-12-13T14:08:26.013648373Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:08:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3450 runtime=io.containerd.runc.v2\n" Dec 13 14:08:26.669918 kubelet[2790]: E1213 14:08:26.669834 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jgbk" podUID="e35f131e-6a5b-4f9b-80ee-8f99f7186350" Dec 13 14:08:26.807904 env[1588]: time="2024-12-13T14:08:26.807829709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 14:08:28.668435 kubelet[2790]: E1213 14:08:28.668392 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jgbk" podUID="e35f131e-6a5b-4f9b-80ee-8f99f7186350" Dec 13 14:08:30.314640 env[1588]: time="2024-12-13T14:08:30.314573077Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:30.321206 env[1588]: time="2024-12-13T14:08:30.321157869Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:30.324336 env[1588]: time="2024-12-13T14:08:30.324292649Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:30.329949 env[1588]: time="2024-12-13T14:08:30.329911511Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:30.330758 env[1588]: time="2024-12-13T14:08:30.330729365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 14:08:30.334718 env[1588]: time="2024-12-13T14:08:30.334684680Z" level=info msg="CreateContainer within sandbox \"74ca0216c7e087ec913c2b73cbb9adf11f1c0ff466024f47e91cd4992ad3a2a7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 14:08:30.374319 env[1588]: time="2024-12-13T14:08:30.374257865Z" level=info msg="CreateContainer within sandbox \"74ca0216c7e087ec913c2b73cbb9adf11f1c0ff466024f47e91cd4992ad3a2a7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"169f36311d0084ac591304c352c3a609590114f8b1188b2ee2483caba886e7b7\"" Dec 13 14:08:30.378106 
env[1588]: time="2024-12-13T14:08:30.378071024Z" level=info msg="StartContainer for \"169f36311d0084ac591304c352c3a609590114f8b1188b2ee2483caba886e7b7\"" Dec 13 14:08:30.438472 env[1588]: time="2024-12-13T14:08:30.438403111Z" level=info msg="StartContainer for \"169f36311d0084ac591304c352c3a609590114f8b1188b2ee2483caba886e7b7\" returns successfully" Dec 13 14:08:30.669195 kubelet[2790]: E1213 14:08:30.669096 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jgbk" podUID="e35f131e-6a5b-4f9b-80ee-8f99f7186350" Dec 13 14:08:31.530347 env[1588]: time="2024-12-13T14:08:31.530280568Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:08:31.554235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-169f36311d0084ac591304c352c3a609590114f8b1188b2ee2483caba886e7b7-rootfs.mount: Deactivated successfully. Dec 13 14:08:31.606287 kubelet[2790]: I1213 14:08:31.606253 2790 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:08:31.640925 kubelet[2790]: I1213 14:08:31.640878 2790 topology_manager.go:215] "Topology Admit Handler" podUID="63d4a6f1-19ab-4534-9a5d-579c3598a6da" podNamespace="kube-system" podName="coredns-76f75df574-68bh6" Dec 13 14:08:31.650748 kubelet[2790]: W1213 14:08:31.650714 2790 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.6-a-c740448bc5" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.6-a-c740448bc5' and this object Dec 13 14:08:31.650951 kubelet[2790]: E1213 14:08:31.650939 2790 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.6-a-c740448bc5" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.6-a-c740448bc5' and this object Dec 13 14:08:31.653017 kubelet[2790]: I1213 14:08:31.652982 2790 topology_manager.go:215] "Topology Admit Handler" podUID="2200e8b6-ef61-4e96-abba-b05c84f6a27d" podNamespace="kube-system" podName="coredns-76f75df574-gnqj2" Dec 13 14:08:31.656418 kubelet[2790]: I1213 14:08:31.656376 2790 topology_manager.go:215] "Topology Admit Handler" podUID="d5db41ac-60b1-4aef-8370-a52f5b42bc29" podNamespace="calico-system" podName="calico-kube-controllers-559747f56b-6lsgx" Dec 13 14:08:31.656562 kubelet[2790]: I1213 14:08:31.656539 2790 topology_manager.go:215] "Topology Admit Handler" podUID="4c011aa9-d666-4206-8289-8d5531610d0f" podNamespace="calico-apiserver" podName="calico-apiserver-6777fb8766-ckq89" Dec 13 14:08:31.662954 kubelet[2790]: I1213 14:08:31.662589 2790 topology_manager.go:215] "Topology Admit Handler" podUID="26820ebc-4ace-4cbe-bb2e-a9e912553e07" podNamespace="calico-apiserver" podName="calico-apiserver-6777fb8766-cqmhg" Dec 13 14:08:31.695798 kubelet[2790]: I1213 14:08:31.695760 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/4c011aa9-d666-4206-8289-8d5531610d0f-calico-apiserver-certs\") pod \"calico-apiserver-6777fb8766-ckq89\" (UID: \"4c011aa9-d666-4206-8289-8d5531610d0f\") " pod="calico-apiserver/calico-apiserver-6777fb8766-ckq89" Dec 13 14:08:31.695798 kubelet[2790]: I1213 14:08:31.695805 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrnn2\" (UniqueName: \"kubernetes.io/projected/4c011aa9-d666-4206-8289-8d5531610d0f-kube-api-access-jrnn2\") pod \"calico-apiserver-6777fb8766-ckq89\" (UID: \"4c011aa9-d666-4206-8289-8d5531610d0f\") " pod="calico-apiserver/calico-apiserver-6777fb8766-ckq89" Dec 13 14:08:31.696203 kubelet[2790]: I1213 14:08:31.695841 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjk27\" (UniqueName: \"kubernetes.io/projected/d5db41ac-60b1-4aef-8370-a52f5b42bc29-kube-api-access-mjk27\") pod \"calico-kube-controllers-559747f56b-6lsgx\" (UID: \"d5db41ac-60b1-4aef-8370-a52f5b42bc29\") " pod="calico-system/calico-kube-controllers-559747f56b-6lsgx" Dec 13 14:08:31.696203 kubelet[2790]: I1213 14:08:31.695870 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2200e8b6-ef61-4e96-abba-b05c84f6a27d-config-volume\") pod \"coredns-76f75df574-gnqj2\" (UID: \"2200e8b6-ef61-4e96-abba-b05c84f6a27d\") " pod="kube-system/coredns-76f75df574-gnqj2" Dec 13 14:08:31.696203 kubelet[2790]: I1213 14:08:31.695890 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63d4a6f1-19ab-4534-9a5d-579c3598a6da-config-volume\") pod \"coredns-76f75df574-68bh6\" (UID: \"63d4a6f1-19ab-4534-9a5d-579c3598a6da\") " pod="kube-system/coredns-76f75df574-68bh6" Dec 13 14:08:31.696203 kubelet[2790]: I1213 14:08:31.695922 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x9kz\" (UniqueName: \"kubernetes.io/projected/63d4a6f1-19ab-4534-9a5d-579c3598a6da-kube-api-access-8x9kz\") pod \"coredns-76f75df574-68bh6\" (UID: \"63d4a6f1-19ab-4534-9a5d-579c3598a6da\") " pod="kube-system/coredns-76f75df574-68bh6" Dec 13 14:08:31.696203 kubelet[2790]: I1213 14:08:31.695943 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5db41ac-60b1-4aef-8370-a52f5b42bc29-tigera-ca-bundle\") pod \"calico-kube-controllers-559747f56b-6lsgx\" (UID: \"d5db41ac-60b1-4aef-8370-a52f5b42bc29\") " pod="calico-system/calico-kube-controllers-559747f56b-6lsgx" Dec 13 14:08:31.696352 kubelet[2790]: I1213 14:08:31.695965 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-847wf\" (UniqueName: \"kubernetes.io/projected/26820ebc-4ace-4cbe-bb2e-a9e912553e07-kube-api-access-847wf\") pod \"calico-apiserver-6777fb8766-cqmhg\" (UID: \"26820ebc-4ace-4cbe-bb2e-a9e912553e07\") " pod="calico-apiserver/calico-apiserver-6777fb8766-cqmhg" Dec 13 14:08:31.696352 kubelet[2790]: I1213 14:08:31.695998 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqsxz\" (UniqueName: \"kubernetes.io/projected/2200e8b6-ef61-4e96-abba-b05c84f6a27d-kube-api-access-zqsxz\") pod \"coredns-76f75df574-gnqj2\" (UID: 
\"2200e8b6-ef61-4e96-abba-b05c84f6a27d\") " pod="kube-system/coredns-76f75df574-gnqj2" Dec 13 14:08:31.696352 kubelet[2790]: I1213 14:08:31.696025 2790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/26820ebc-4ace-4cbe-bb2e-a9e912553e07-calico-apiserver-certs\") pod \"calico-apiserver-6777fb8766-cqmhg\" (UID: \"26820ebc-4ace-4cbe-bb2e-a9e912553e07\") " pod="calico-apiserver/calico-apiserver-6777fb8766-cqmhg" Dec 13 14:08:32.697569 env[1588]: time="2024-12-13T14:08:32.694671228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-559747f56b-6lsgx,Uid:d5db41ac-60b1-4aef-8370-a52f5b42bc29,Namespace:calico-system,Attempt:0,}" Dec 13 14:08:32.697569 env[1588]: time="2024-12-13T14:08:32.695228211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777fb8766-ckq89,Uid:4c011aa9-d666-4206-8289-8d5531610d0f,Namespace:calico-apiserver,Attempt:0,}" Dec 13 14:08:32.697569 env[1588]: time="2024-12-13T14:08:32.695728035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jgbk,Uid:e35f131e-6a5b-4f9b-80ee-8f99f7186350,Namespace:calico-system,Attempt:0,}" Dec 13 14:08:32.697569 env[1588]: time="2024-12-13T14:08:32.695990547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777fb8766-cqmhg,Uid:26820ebc-4ace-4cbe-bb2e-a9e912553e07,Namespace:calico-apiserver,Attempt:0,}" Dec 13 14:08:32.753759 env[1588]: time="2024-12-13T14:08:32.753681281Z" level=info msg="shim disconnected" id=169f36311d0084ac591304c352c3a609590114f8b1188b2ee2483caba886e7b7 Dec 13 14:08:32.753759 env[1588]: time="2024-12-13T14:08:32.753744599Z" level=warning msg="cleaning up after shim disconnected" id=169f36311d0084ac591304c352c3a609590114f8b1188b2ee2483caba886e7b7 namespace=k8s.io Dec 13 14:08:32.753759 env[1588]: time="2024-12-13T14:08:32.753755079Z" level=info msg="cleaning up dead shim" Dec 13 14:08:32.761978 env[1588]: time="2024-12-13T14:08:32.761923306Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:08:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3518 runtime=io.containerd.runc.v2\n" Dec 13 14:08:32.840530 env[1588]: time="2024-12-13T14:08:32.840494074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 14:08:32.844963 env[1588]: time="2024-12-13T14:08:32.844905217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-68bh6,Uid:63d4a6f1-19ab-4534-9a5d-579c3598a6da,Namespace:kube-system,Attempt:0,}" Dec 13 14:08:32.863880 env[1588]: time="2024-12-13T14:08:32.863832391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gnqj2,Uid:2200e8b6-ef61-4e96-abba-b05c84f6a27d,Namespace:kube-system,Attempt:0,}" Dec 13 14:08:32.988781 env[1588]: time="2024-12-13T14:08:32.988703886Z" level=error msg="Failed to destroy network for sandbox \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:32.989409 env[1588]: time="2024-12-13T14:08:32.989359705Z" level=error msg="encountered an error cleaning up failed sandbox \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:32.989480 env[1588]: time="2024-12-13T14:08:32.989415704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-559747f56b-6lsgx,Uid:d5db41ac-60b1-4aef-8370-a52f5b42bc29,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:32.989847 kubelet[2790]: E1213 14:08:32.989813 2790 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:32.990155 kubelet[2790]: E1213 14:08:32.989876 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-559747f56b-6lsgx" Dec 13 14:08:32.990155 kubelet[2790]: E1213 14:08:32.989898 2790 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-559747f56b-6lsgx" Dec 13 14:08:32.990155 kubelet[2790]: E1213 14:08:32.989950 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-559747f56b-6lsgx_calico-system(d5db41ac-60b1-4aef-8370-a52f5b42bc29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-559747f56b-6lsgx_calico-system(d5db41ac-60b1-4aef-8370-a52f5b42bc29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-559747f56b-6lsgx" podUID="d5db41ac-60b1-4aef-8370-a52f5b42bc29" Dec 13 14:08:33.005929 env[1588]: time="2024-12-13T14:08:33.005874596Z" level=error msg="Failed to destroy network for sandbox \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.006414 env[1588]: time="2024-12-13T14:08:33.006381140Z" level=error msg="encountered an error cleaning up failed sandbox 
\"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.006543 env[1588]: time="2024-12-13T14:08:33.006517936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777fb8766-ckq89,Uid:4c011aa9-d666-4206-8289-8d5531610d0f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.006965 kubelet[2790]: E1213 14:08:33.006929 2790 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.007042 kubelet[2790]: E1213 14:08:33.006985 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6777fb8766-ckq89" Dec 13 14:08:33.007042 kubelet[2790]: E1213 14:08:33.007006 2790 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6777fb8766-ckq89" Dec 13 14:08:33.007110 kubelet[2790]: E1213 14:08:33.007054 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6777fb8766-ckq89_calico-apiserver(4c011aa9-d666-4206-8289-8d5531610d0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6777fb8766-ckq89_calico-apiserver(4c011aa9-d666-4206-8289-8d5531610d0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6777fb8766-ckq89" podUID="4c011aa9-d666-4206-8289-8d5531610d0f" Dec 13 14:08:33.077280 env[1588]: time="2024-12-13T14:08:33.077221653Z" level=error msg="Failed to destroy network for sandbox \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 
14:08:33.077812 env[1588]: time="2024-12-13T14:08:33.077777716Z" level=error msg="encountered an error cleaning up failed sandbox \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.077950 env[1588]: time="2024-12-13T14:08:33.077923631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jgbk,Uid:e35f131e-6a5b-4f9b-80ee-8f99f7186350,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.078317 kubelet[2790]: E1213 14:08:33.078276 2790 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.078401 kubelet[2790]: E1213 14:08:33.078353 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jgbk" Dec 13 14:08:33.078401 kubelet[2790]: E1213 14:08:33.078384 2790 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jgbk" Dec 13 14:08:33.078461 kubelet[2790]: E1213 14:08:33.078435 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9jgbk_calico-system(e35f131e-6a5b-4f9b-80ee-8f99f7186350)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9jgbk_calico-system(e35f131e-6a5b-4f9b-80ee-8f99f7186350)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9jgbk" podUID="e35f131e-6a5b-4f9b-80ee-8f99f7186350" Dec 13 14:08:33.082400 env[1588]: time="2024-12-13T14:08:33.082350776Z" level=error msg="Failed to destroy network for sandbox \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Dec 13 14:08:33.085202 env[1588]: time="2024-12-13T14:08:33.085155690Z" level=error msg="encountered an error cleaning up failed sandbox \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.085395 env[1588]: time="2024-12-13T14:08:33.085366003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777fb8766-cqmhg,Uid:26820ebc-4ace-4cbe-bb2e-a9e912553e07,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.086002 kubelet[2790]: E1213 14:08:33.085706 2790 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.086002 kubelet[2790]: E1213 14:08:33.085751 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6777fb8766-cqmhg" Dec 13 14:08:33.086002 kubelet[2790]: E1213 14:08:33.085769 2790 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6777fb8766-cqmhg" Dec 13 14:08:33.086142 kubelet[2790]: E1213 14:08:33.085819 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6777fb8766-cqmhg_calico-apiserver(26820ebc-4ace-4cbe-bb2e-a9e912553e07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6777fb8766-cqmhg_calico-apiserver(26820ebc-4ace-4cbe-bb2e-a9e912553e07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6777fb8766-cqmhg" podUID="26820ebc-4ace-4cbe-bb2e-a9e912553e07" Dec 13 14:08:33.099483 env[1588]: time="2024-12-13T14:08:33.099426413Z" level=error msg="Failed to destroy network for sandbox \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.100002 env[1588]: time="2024-12-13T14:08:33.099966797Z" level=error msg="encountered an error cleaning up failed sandbox \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.100123 env[1588]: time="2024-12-13T14:08:33.100097993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-68bh6,Uid:63d4a6f1-19ab-4534-9a5d-579c3598a6da,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.100836 kubelet[2790]: E1213 14:08:33.100437 2790 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.100836 kubelet[2790]: E1213 14:08:33.100492 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-68bh6" Dec 13 14:08:33.100836 kubelet[2790]: E1213 14:08:33.100514 2790 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-68bh6" Dec 13 14:08:33.100987 kubelet[2790]: E1213 14:08:33.100570 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-68bh6_kube-system(63d4a6f1-19ab-4534-9a5d-579c3598a6da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-68bh6_kube-system(63d4a6f1-19ab-4534-9a5d-579c3598a6da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-68bh6" podUID="63d4a6f1-19ab-4534-9a5d-579c3598a6da" Dec 13 14:08:33.108949 env[1588]: time="2024-12-13T14:08:33.108895764Z" level=error msg="Failed to destroy network for sandbox \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.109417 env[1588]: time="2024-12-13T14:08:33.109384869Z" level=error msg="encountered an error cleaning up failed sandbox \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.109534 env[1588]: time="2024-12-13T14:08:33.109508105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gnqj2,Uid:2200e8b6-ef61-4e96-abba-b05c84f6a27d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.109898 kubelet[2790]: E1213 14:08:33.109867 2790 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.109974 kubelet[2790]: E1213 14:08:33.109938 2790 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-gnqj2" Dec 13 14:08:33.109974 kubelet[2790]: E1213 14:08:33.109963 2790 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-gnqj2" Dec 13 14:08:33.110045 kubelet[2790]: E1213 14:08:33.110025 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-gnqj2_kube-system(2200e8b6-ef61-4e96-abba-b05c84f6a27d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-gnqj2_kube-system(2200e8b6-ef61-4e96-abba-b05c84f6a27d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gnqj2" podUID="2200e8b6-ef61-4e96-abba-b05c84f6a27d" Dec 13 14:08:33.552057 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d-shm.mount: 
Deactivated successfully. Dec 13 14:08:33.552253 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1-shm.mount: Deactivated successfully. Dec 13 14:08:33.835452 kubelet[2790]: I1213 14:08:33.833103 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Dec 13 14:08:33.835691 env[1588]: time="2024-12-13T14:08:33.833840623Z" level=info msg="StopPodSandbox for \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\"" Dec 13 14:08:33.836354 kubelet[2790]: I1213 14:08:33.836327 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Dec 13 14:08:33.839210 env[1588]: time="2024-12-13T14:08:33.838956626Z" level=info msg="StopPodSandbox for \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\"" Dec 13 14:08:33.839937 kubelet[2790]: I1213 14:08:33.839910 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Dec 13 14:08:33.841046 env[1588]: time="2024-12-13T14:08:33.840945885Z" level=info msg="StopPodSandbox for \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\"" Dec 13 14:08:33.842437 kubelet[2790]: I1213 14:08:33.842396 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Dec 13 14:08:33.842986 env[1588]: time="2024-12-13T14:08:33.842944984Z" level=info msg="StopPodSandbox for \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\"" Dec 13 14:08:33.846656 kubelet[2790]: I1213 14:08:33.846591 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Dec 13 14:08:33.847247 env[1588]: time="2024-12-13T14:08:33.847221133Z" level=info msg="StopPodSandbox for \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\"" Dec 13 14:08:33.848977 kubelet[2790]: I1213 14:08:33.848513 2790 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Dec 13 14:08:33.849229 env[1588]: time="2024-12-13T14:08:33.849202953Z" level=info msg="StopPodSandbox for \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\"" Dec 13 14:08:33.911249 env[1588]: time="2024-12-13T14:08:33.911195176Z" level=error msg="StopPodSandbox for \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\" failed" error="failed to destroy network for sandbox \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.911641 kubelet[2790]: E1213 14:08:33.911615 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Dec 13 14:08:33.911732 kubelet[2790]: E1213 14:08:33.911694 2790 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1"} Dec 13 14:08:33.911732 kubelet[2790]: E1213 14:08:33.911728 2790 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d5db41ac-60b1-4aef-8370-a52f5b42bc29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:08:33.911822 kubelet[2790]: E1213 14:08:33.911760 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d5db41ac-60b1-4aef-8370-a52f5b42bc29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-559747f56b-6lsgx" podUID="d5db41ac-60b1-4aef-8370-a52f5b42bc29" Dec 13 14:08:33.914374 env[1588]: time="2024-12-13T14:08:33.914315680Z" level=error msg="StopPodSandbox for \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\" failed" error="failed to destroy network for sandbox \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.914931 kubelet[2790]: E1213 14:08:33.914695 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Dec 13 14:08:33.914931 kubelet[2790]: E1213 14:08:33.914740 2790 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a"} Dec 13 14:08:33.914931 kubelet[2790]: E1213 14:08:33.914773 2790 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63d4a6f1-19ab-4534-9a5d-579c3598a6da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:08:33.914931 kubelet[2790]: E1213 14:08:33.914799 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63d4a6f1-19ab-4534-9a5d-579c3598a6da\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-68bh6" podUID="63d4a6f1-19ab-4534-9a5d-579c3598a6da" Dec 13 14:08:33.924415 env[1588]: time="2024-12-13T14:08:33.924349253Z" level=error msg="StopPodSandbox for \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\" failed" error="failed to destroy network for sandbox \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.924649 kubelet[2790]: E1213 14:08:33.924622 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Dec 13 14:08:33.924730 kubelet[2790]: E1213 14:08:33.924672 2790 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a"} Dec 13 14:08:33.924730 kubelet[2790]: E1213 14:08:33.924707 2790 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2200e8b6-ef61-4e96-abba-b05c84f6a27d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:08:33.924807 kubelet[2790]: E1213 14:08:33.924734 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2200e8b6-ef61-4e96-abba-b05c84f6a27d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gnqj2" podUID="2200e8b6-ef61-4e96-abba-b05c84f6a27d" Dec 13 14:08:33.935472 env[1588]: time="2024-12-13T14:08:33.935396675Z" level=error msg="StopPodSandbox for \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\" failed" error="failed to destroy network for sandbox \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.935892 kubelet[2790]: E1213 14:08:33.935841 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Dec 13 14:08:33.935982 kubelet[2790]: E1213 14:08:33.935918 2790 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0"} Dec 13 14:08:33.936019 kubelet[2790]: E1213 14:08:33.935984 2790 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26820ebc-4ace-4cbe-bb2e-a9e912553e07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:08:33.936076 kubelet[2790]: E1213 14:08:33.936031 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26820ebc-4ace-4cbe-bb2e-a9e912553e07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6777fb8766-cqmhg" podUID="26820ebc-4ace-4cbe-bb2e-a9e912553e07" Dec 13 14:08:33.945038 env[1588]: time="2024-12-13T14:08:33.944771709Z" level=error msg="StopPodSandbox for \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\" failed" error="failed to destroy network for sandbox \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.945200 kubelet[2790]: E1213 14:08:33.945028 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Dec 13 14:08:33.945200 kubelet[2790]: E1213 14:08:33.945065 2790 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d"} Dec 13 14:08:33.945200 kubelet[2790]: E1213 14:08:33.945106 2790 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c011aa9-d666-4206-8289-8d5531610d0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Dec 13 14:08:33.945200 kubelet[2790]: E1213 14:08:33.945133 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c011aa9-d666-4206-8289-8d5531610d0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6777fb8766-ckq89" podUID="4c011aa9-d666-4206-8289-8d5531610d0f" Dec 13 14:08:33.954453 env[1588]: time="2024-12-13T14:08:33.954395774Z" level=error msg="StopPodSandbox for \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\" failed" error="failed to destroy network for sandbox \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:33.954921 kubelet[2790]: E1213 14:08:33.954894 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Dec 13 14:08:33.955014 kubelet[2790]: E1213 14:08:33.954947 2790 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531"} Dec 13 14:08:33.955014 kubelet[2790]: E1213 14:08:33.954981 2790 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e35f131e-6a5b-4f9b-80ee-8f99f7186350\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:08:33.955014 kubelet[2790]: E1213 14:08:33.955009 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e35f131e-6a5b-4f9b-80ee-8f99f7186350\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9jgbk" podUID="e35f131e-6a5b-4f9b-80ee-8f99f7186350" Dec 13 14:08:43.186075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount343899153.mount: Deactivated successfully. 
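Annotation: every RunPodSandbox and StopPodSandbox failure recorded above has the same root cause — the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file the calico/node container writes into its /var/lib/calico/ host mount once it is running, so both network setup (add) and teardown (delete) are rejected until that container comes up (it is only started later in this log, at 14:08:49). The Go sketch below merely illustrates that check and is shaped to reproduce the error text seen here; it is not Calico's actual implementation, and readNodename is a made-up helper name.

package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

// readNodename mimics the failing check: calico/node writes the node's name
// to /var/lib/calico/nodename, and the CNI plugin refuses to add or delete
// pod networks while that file is missing.
func readNodename(path string) (string, error) {
	if _, err := os.Stat(path); errors.Is(err, os.ErrNotExist) {
		// Same failure mode as the log: the stat fails because calico/node
		// has not started and populated /var/lib/calico/ yet.
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", path)
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename("/var/lib/calico/nodename")
	if err != nil {
		fmt.Println("sandbox setup would fail:", err)
		return
	}
	fmt.Println("node name:", name)
}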
Dec 13 14:08:44.669874 env[1588]: time="2024-12-13T14:08:44.669821718Z" level=info msg="StopPodSandbox for \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\"" Dec 13 14:08:44.671194 env[1588]: time="2024-12-13T14:08:44.671167162Z" level=info msg="StopPodSandbox for \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\"" Dec 13 14:08:44.699463 env[1588]: time="2024-12-13T14:08:44.699396791Z" level=error msg="StopPodSandbox for \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\" failed" error="failed to destroy network for sandbox \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:44.699814 kubelet[2790]: E1213 14:08:44.699787 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Dec 13 14:08:44.700158 kubelet[2790]: E1213 14:08:44.699843 2790 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1"} Dec 13 14:08:44.700158 kubelet[2790]: E1213 14:08:44.699881 2790 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d5db41ac-60b1-4aef-8370-a52f5b42bc29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:08:44.700158 kubelet[2790]: E1213 14:08:44.699909 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d5db41ac-60b1-4aef-8370-a52f5b42bc29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-559747f56b-6lsgx" podUID="d5db41ac-60b1-4aef-8370-a52f5b42bc29" Dec 13 14:08:44.704073 env[1588]: time="2024-12-13T14:08:44.704031264Z" level=error msg="StopPodSandbox for \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\" failed" error="failed to destroy network for sandbox \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:44.704369 kubelet[2790]: E1213 14:08:44.704327 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Dec 13 14:08:44.704369 kubelet[2790]: E1213 14:08:44.704371 2790 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531"} Dec 13 14:08:44.704477 kubelet[2790]: E1213 14:08:44.704406 2790 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e35f131e-6a5b-4f9b-80ee-8f99f7186350\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:08:44.704477 kubelet[2790]: E1213 14:08:44.704430 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e35f131e-6a5b-4f9b-80ee-8f99f7186350\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9jgbk" podUID="e35f131e-6a5b-4f9b-80ee-8f99f7186350" Dec 13 14:08:46.137218 kubelet[2790]: I1213 14:08:46.137190 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:08:48.442459 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 13 14:08:48.442545 kernel: audit: type=1325 audit(1734098926.167:305): table=filter:98 family=2 entries=17 op=nft_register_rule pid=3875 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:08:48.442569 kernel: audit: type=1300 audit(1734098926.167:305): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffc4343660 a2=0 a3=1 items=0 ppid=2930 pid=3875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:48.442616 kernel: audit: type=1327 audit(1734098926.167:305): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:48.442641 kernel: audit: type=1325 audit(1734098926.188:306): table=nat:99 family=2 entries=19 op=nft_register_chain pid=3875 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:08:48.442661 kernel: audit: type=1300 audit(1734098926.188:306): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc4343660 a2=0 a3=1 items=0 ppid=2930 pid=3875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:48.442680 kernel: audit: type=1327 audit(1734098926.188:306): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:46.167000 audit[3875]: 
NETFILTER_CFG table=filter:98 family=2 entries=17 op=nft_register_rule pid=3875 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:08:46.167000 audit[3875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffc4343660 a2=0 a3=1 items=0 ppid=2930 pid=3875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:46.167000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:46.188000 audit[3875]: NETFILTER_CFG table=nat:99 family=2 entries=19 op=nft_register_chain pid=3875 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:08:46.188000 audit[3875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc4343660 a2=0 a3=1 items=0 ppid=2930 pid=3875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:46.188000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:08:48.489437 env[1588]: time="2024-12-13T14:08:46.669835670Z" level=info msg="StopPodSandbox for \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\"" Dec 13 14:08:48.489437 env[1588]: time="2024-12-13T14:08:48.395196118Z" level=error msg="StopPodSandbox for \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\" failed" error="failed to destroy network for sandbox \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:48.489743 kubelet[2790]: E1213 14:08:48.395417 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Dec 13 14:08:48.489743 kubelet[2790]: E1213 14:08:48.395458 2790 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0"} Dec 13 14:08:48.489743 kubelet[2790]: E1213 14:08:48.395492 2790 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26820ebc-4ace-4cbe-bb2e-a9e912553e07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:08:48.489743 kubelet[2790]: E1213 14:08:48.395521 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26820ebc-4ace-4cbe-bb2e-a9e912553e07\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6777fb8766-cqmhg" podUID="26820ebc-4ace-4cbe-bb2e-a9e912553e07" Dec 13 14:08:48.670655 env[1588]: time="2024-12-13T14:08:48.670619454Z" level=info msg="StopPodSandbox for \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\"" Dec 13 14:08:48.671428 env[1588]: time="2024-12-13T14:08:48.671020084Z" level=info msg="StopPodSandbox for \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\"" Dec 13 14:08:48.671739 env[1588]: time="2024-12-13T14:08:48.671047243Z" level=info msg="StopPodSandbox for \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\"" Dec 13 14:08:48.715700 env[1588]: time="2024-12-13T14:08:48.715612708Z" level=error msg="StopPodSandbox for \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\" failed" error="failed to destroy network for sandbox \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:48.716111 kubelet[2790]: E1213 14:08:48.716084 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Dec 13 14:08:48.716202 kubelet[2790]: E1213 14:08:48.716130 2790 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a"} Dec 13 14:08:48.716202 kubelet[2790]: E1213 14:08:48.716164 2790 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63d4a6f1-19ab-4534-9a5d-579c3598a6da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:08:48.716202 kubelet[2790]: E1213 14:08:48.716196 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63d4a6f1-19ab-4534-9a5d-579c3598a6da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-68bh6" podUID="63d4a6f1-19ab-4534-9a5d-579c3598a6da" Dec 13 14:08:48.717294 env[1588]: time="2024-12-13T14:08:48.717249785Z" level=error msg="StopPodSandbox for 
\"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\" failed" error="failed to destroy network for sandbox \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:48.717522 kubelet[2790]: E1213 14:08:48.717499 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Dec 13 14:08:48.717612 kubelet[2790]: E1213 14:08:48.717532 2790 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d"} Dec 13 14:08:48.717612 kubelet[2790]: E1213 14:08:48.717565 2790 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c011aa9-d666-4206-8289-8d5531610d0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:08:48.717612 kubelet[2790]: E1213 14:08:48.717588 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c011aa9-d666-4206-8289-8d5531610d0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6777fb8766-ckq89" podUID="4c011aa9-d666-4206-8289-8d5531610d0f" Dec 13 14:08:48.718949 env[1588]: time="2024-12-13T14:08:48.718906181Z" level=error msg="StopPodSandbox for \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\" failed" error="failed to destroy network for sandbox \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:08:48.719293 kubelet[2790]: E1213 14:08:48.719187 2790 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Dec 13 14:08:48.719293 kubelet[2790]: E1213 14:08:48.719213 2790 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a"} Dec 13 14:08:48.719293 kubelet[2790]: E1213 14:08:48.719244 2790 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2200e8b6-ef61-4e96-abba-b05c84f6a27d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:08:48.719293 kubelet[2790]: E1213 14:08:48.719278 2790 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2200e8b6-ef61-4e96-abba-b05c84f6a27d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gnqj2" podUID="2200e8b6-ef61-4e96-abba-b05c84f6a27d" Dec 13 14:08:48.788576 env[1588]: time="2024-12-13T14:08:48.787681727Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:48.896468 env[1588]: time="2024-12-13T14:08:48.896410380Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:48.944377 env[1588]: time="2024-12-13T14:08:48.944309357Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:48.991017 env[1588]: time="2024-12-13T14:08:48.990960927Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:48.991712 env[1588]: time="2024-12-13T14:08:48.991670828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 14:08:49.009570 env[1588]: time="2024-12-13T14:08:49.009535359Z" level=info msg="CreateContainer within sandbox \"74ca0216c7e087ec913c2b73cbb9adf11f1c0ff466024f47e91cd4992ad3a2a7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 14:08:49.205272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12676611.mount: Deactivated successfully. 
Dec 13 14:08:49.341002 env[1588]: time="2024-12-13T14:08:49.340880013Z" level=info msg="CreateContainer within sandbox \"74ca0216c7e087ec913c2b73cbb9adf11f1c0ff466024f47e91cd4992ad3a2a7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ba62ab3582242843fc2979ed0b06c83b7d6b2b19e21ad25d5011cee87b9e8a42\"" Dec 13 14:08:49.341589 env[1588]: time="2024-12-13T14:08:49.341556795Z" level=info msg="StartContainer for \"ba62ab3582242843fc2979ed0b06c83b7d6b2b19e21ad25d5011cee87b9e8a42\"" Dec 13 14:08:49.408565 env[1588]: time="2024-12-13T14:08:49.408520964Z" level=info msg="StartContainer for \"ba62ab3582242843fc2979ed0b06c83b7d6b2b19e21ad25d5011cee87b9e8a42\" returns successfully" Dec 13 14:08:49.510669 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 14:08:49.510826 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 14:08:49.905344 kubelet[2790]: I1213 14:08:49.905195 2790 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-bnx9h" podStartSLOduration=1.558178976 podStartE2EDuration="28.905103216s" podCreationTimestamp="2024-12-13 14:08:21 +0000 UTC" firstStartedPulling="2024-12-13 14:08:21.645010341 +0000 UTC m=+23.114146725" lastFinishedPulling="2024-12-13 14:08:48.991934581 +0000 UTC m=+50.461070965" observedRunningTime="2024-12-13 14:08:49.90114196 +0000 UTC m=+51.370278344" watchObservedRunningTime="2024-12-13 14:08:49.905103216 +0000 UTC m=+51.374239600" Dec 13 14:08:50.780000 audit[4068]: AVC avc: denied { write } for pid=4068 comm="tee" name="fd" dev="proc" ino=25144 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:08:50.780000 audit[4068]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff9f7ba11 a2=241 a3=1b6 items=1 ppid=4054 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:50.800773 kernel: audit: type=1400 audit(1734098930.780:307): avc: denied { write } for pid=4068 comm="tee" name="fd" dev="proc" ino=25144 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:08:50.780000 audit: CWD cwd="/etc/service/enabled/bird/log" Dec 13 14:08:50.833623 kernel: audit: type=1300 audit(1734098930.780:307): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff9f7ba11 a2=241 a3=1b6 items=1 ppid=4054 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:50.833694 kernel: audit: type=1307 audit(1734098930.780:307): cwd="/etc/service/enabled/bird/log" Dec 13 14:08:50.780000 audit: PATH item=0 name="/dev/fd/63" inode=25873 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:08:50.850672 kernel: audit: type=1302 audit(1734098930.780:307): item=0 name="/dev/fd/63" inode=25873 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:08:50.780000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:08:50.794000 audit[4078]: AVC avc: denied { write } for pid=4078 comm="tee" name="fd" dev="proc" ino=25168 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:08:50.794000 audit[4078]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe0215a00 a2=241 a3=1b6 items=1 ppid=4057 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:50.794000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Dec 13 14:08:50.794000 audit: PATH item=0 name="/dev/fd/63" inode=25165 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:08:50.794000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:08:50.798000 audit[4075]: AVC avc: denied { write } for pid=4075 comm="tee" name="fd" dev="proc" ino=25172 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:08:50.798000 audit[4075]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc0757a10 a2=241 a3=1b6 items=1 ppid=4065 pid=4075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:50.798000 audit: CWD cwd="/etc/service/enabled/confd/log" Dec 13 14:08:50.798000 audit: PATH item=0 name="/dev/fd/63" inode=25160 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:08:50.798000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:08:50.799000 audit[4081]: AVC avc: denied { write } for pid=4081 comm="tee" name="fd" dev="proc" ino=25880 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:08:50.799000 audit[4081]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc3656a10 a2=241 a3=1b6 items=1 ppid=4055 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:50.799000 audit: CWD cwd="/etc/service/enabled/bird6/log" Dec 13 14:08:50.799000 audit: PATH item=0 name="/dev/fd/63" inode=25876 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:08:50.799000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:08:50.800000 audit[4089]: AVC avc: denied { write } for pid=4089 comm="tee" name="fd" dev="proc" ino=25887 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 
14:08:50.800000 audit[4089]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcbe6fa10 a2=241 a3=1b6 items=1 ppid=4053 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:50.800000 audit: CWD cwd="/etc/service/enabled/felix/log" Dec 13 14:08:50.800000 audit: PATH item=0 name="/dev/fd/63" inode=25882 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:08:50.800000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:08:50.800000 audit[4092]: AVC avc: denied { write } for pid=4092 comm="tee" name="fd" dev="proc" ino=25892 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:08:50.800000 audit[4092]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd2135a12 a2=241 a3=1b6 items=1 ppid=4052 pid=4092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:50.800000 audit: CWD cwd="/etc/service/enabled/cni/log" Dec 13 14:08:50.800000 audit: PATH item=0 name="/dev/fd/63" inode=25889 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:08:50.800000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:08:50.806000 audit[4084]: AVC avc: denied { write } for pid=4084 comm="tee" name="fd" dev="proc" ino=25896 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:08:50.806000 audit[4084]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff84f9a01 a2=241 a3=1b6 items=1 ppid=4051 pid=4084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:50.806000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 14:08:50.806000 audit: PATH item=0 name="/dev/fd/63" inode=25877 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:08:50.806000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:08:51.736000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.745421 kernel: kauditd_printk_skb: 31 callbacks suppressed Dec 13 14:08:51.745488 kernel: audit: type=1400 audit(1734098931.736:314): avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.736000 audit[4149]: AVC avc: denied { bpf } for pid=4149 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.779194 kernel: audit: type=1400 audit(1734098931.736:314): avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.779365 kernel: audit: type=1400 audit(1734098931.736:314): avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.736000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.736000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.812080 kernel: audit: type=1400 audit(1734098931.736:314): avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.812197 kernel: audit: type=1400 audit(1734098931.736:314): avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.736000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.736000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.844487 kernel: audit: type=1400 audit(1734098931.736:314): avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.736000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.861400 kernel: audit: type=1400 audit(1734098931.736:314): avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.736000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.877988 kernel: audit: type=1400 audit(1734098931.736:314): avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.878119 kernel: audit: type=1400 audit(1734098931.736:314): avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.736000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.736000 audit: BPF prog-id=10 op=LOAD Dec 13 14:08:51.901717 kernel: audit: type=1334 audit(1734098931.736:314): prog-id=10 op=LOAD Dec 13 14:08:51.736000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc43c38b8 a2=98 a3=ffffc43c38a8 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.736000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.737000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit: BPF prog-id=11 op=LOAD Dec 13 14:08:51.737000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc43c3548 a2=74 a3=95 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.737000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.737000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.737000 audit: BPF prog-id=12 op=LOAD Dec 13 14:08:51.737000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc43c35a8 a2=94 a3=2 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.737000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.737000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:08:51.844000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.844000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.844000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.844000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.844000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.844000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.844000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.844000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.844000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.844000 audit: BPF prog-id=13 op=LOAD Dec 13 14:08:51.844000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc43c3568 a2=40 a3=ffffc43c3598 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.844000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.861000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:08:51.861000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.861000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffc43c3680 a2=50 a3=0 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.861000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.869000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.869000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc43c35d8 a2=28 a3=ffffc43c3708 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.869000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.869000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.869000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc43c3608 a2=28 a3=ffffc43c3738 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.869000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.869000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.869000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc43c34b8 a2=28 a3=ffffc43c35e8 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.869000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.869000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.869000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc43c3628 a2=28 a3=ffffc43c3758 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.869000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.902000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.902000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc43c3608 a2=28 a3=ffffc43c3738 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.902000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.902000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc43c35f8 a2=28 a3=ffffc43c3728 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.902000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.902000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc43c3628 a2=28 a3=ffffc43c3758 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.902000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.902000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc43c3608 a2=28 a3=ffffc43c3738 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.902000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.902000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc43c3628 a2=28 a3=ffffc43c3758 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.902000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.902000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc43c35f8 a2=28 a3=ffffc43c3728 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc43c3678 a2=28 a3=ffffc43c37b8 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.903000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc43c33b0 a2=50 a3=0 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.903000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 
audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit: BPF prog-id=14 op=LOAD Dec 13 14:08:51.903000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc43c33b8 a2=94 a3=5 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.903000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.903000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc43c34c0 a2=50 a3=0 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.903000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffc43c3608 a2=4 a3=3 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.903000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied 
{ perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.903000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc43c35e8 a2=94 a3=6 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.903000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.905000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.905000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.905000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.905000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.905000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.905000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.905000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.905000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.905000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.905000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.905000 audit[4149]: AVC avc: denied { confidentiality } for pid=4149 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:08:51.905000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc43c2db8 a2=94 a3=83 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.905000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.906000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.906000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.906000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.906000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.906000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.906000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.906000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.906000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.906000 audit[4149]: AVC avc: denied { perfmon } for pid=4149 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.906000 audit[4149]: AVC avc: denied { bpf } for pid=4149 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.906000 audit[4149]: AVC avc: denied { confidentiality } for pid=4149 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:08:51.906000 audit[4149]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc43c2db8 a2=94 a3=83 items=0 ppid=4118 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.906000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:08:51.915000 audit[4153]: AVC avc: denied { bpf } for pid=4153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:08:51.915000 audit[4153]: AVC avc: denied { bpf } for pid=4153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.915000 audit[4153]: AVC avc: denied { perfmon } for pid=4153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.915000 audit[4153]: AVC avc: denied { perfmon } for pid=4153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.915000 audit[4153]: AVC avc: denied { perfmon } for pid=4153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.915000 audit[4153]: AVC avc: denied { perfmon } for pid=4153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.915000 audit[4153]: AVC avc: denied { perfmon } for pid=4153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.915000 audit[4153]: AVC avc: denied { bpf } for pid=4153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.915000 audit[4153]: AVC avc: denied { bpf } for pid=4153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.915000 audit: BPF prog-id=15 op=LOAD Dec 13 14:08:51.915000 audit[4153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff5a5ea48 a2=98 a3=fffff5a5ea38 items=0 ppid=4118 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.915000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:08:51.915000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { bpf } for pid=4153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { bpf } for pid=4153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { perfmon } for pid=4153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { perfmon } for pid=4153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { perfmon } for pid=4153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { perfmon } for pid=4153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { perfmon } for pid=4153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { bpf } for pid=4153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { bpf } for pid=4153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit: BPF prog-id=16 op=LOAD Dec 13 14:08:51.916000 audit[4153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff5a5e8f8 a2=74 a3=95 items=0 ppid=4118 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.916000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:08:51.916000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { bpf } for pid=4153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { bpf } for pid=4153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { perfmon } for pid=4153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { perfmon } for pid=4153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { perfmon } for pid=4153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { perfmon } for pid=4153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { perfmon } for pid=4153 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { bpf } for pid=4153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:51.916000 audit[4153]: AVC avc: denied { bpf } for pid=4153 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 14:08:51.916000 audit: BPF prog-id=17 op=LOAD Dec 13 14:08:51.916000 audit[4153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff5a5e928 a2=40 a3=fffff5a5e958 items=0 ppid=4118 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:51.916000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:08:51.917000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:08:52.959030 systemd-networkd[1760]: vxlan.calico: Link UP Dec 13 14:08:52.959039 systemd-networkd[1760]: vxlan.calico: Gained carrier Dec 13 14:08:53.020000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.020000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.020000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.020000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.020000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.020000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.020000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.020000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.020000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.020000 audit: BPF prog-id=18 op=LOAD Dec 13 14:08:53.020000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffece53af8 a2=98 a3=ffffece53ae8 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.020000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.021000 audit: BPF prog-id=18 op=UNLOAD Dec 
13 14:08:53.021000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit: BPF prog-id=19 op=LOAD Dec 13 14:08:53.021000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffece537d8 a2=74 a3=95 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.021000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit: BPF prog-id=20 op=LOAD Dec 13 14:08:53.021000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffece53838 a2=94 a3=2 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.021000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffece53868 a2=28 a3=ffffece53998 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffece53898 a2=28 a3=ffffece539c8 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 
audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffece53748 a2=28 a3=ffffece53878 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffece538b8 a2=28 a3=ffffece539e8 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.021000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.021000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffece53898 a2=28 a3=ffffece539c8 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.021000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.022000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.022000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffece53888 a2=28 a3=ffffece539b8 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.022000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.022000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.022000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffece538b8 a2=28 a3=ffffece539e8 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.022000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.022000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.022000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffece53898 a2=28 a3=ffffece539c8 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.022000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.022000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.022000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffece538b8 a2=28 a3=ffffece539e8 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.022000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.022000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.022000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffece53888 a2=28 a3=ffffece539b8 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.022000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.022000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.022000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffece53908 a2=28 a3=ffffece53a48 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.022000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.022000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.022000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.022000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.022000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.022000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.022000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.022000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.022000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.022000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.022000 audit: BPF prog-id=21 op=LOAD Dec 13 14:08:53.022000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffece53728 a2=40 a3=ffffece53758 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.022000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.022000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:08:53.023000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.023000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffece53750 a2=50 a3=0 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.023000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.023000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.023000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffece53750 a2=50 a3=0 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.023000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.023000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.023000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.023000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.023000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.023000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.023000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.023000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.023000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.023000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.023000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.023000 audit: BPF prog-id=22 op=LOAD Dec 13 14:08:53.023000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffece52eb8 a2=94 a3=2 items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
14:08:53.023000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.024000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:08:53.024000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.024000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.024000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.024000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.024000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.024000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.024000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.024000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.024000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.024000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.024000 audit: BPF prog-id=23 op=LOAD Dec 13 14:08:53.024000 audit[4196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffece53048 a2=94 a3=2d items=0 ppid=4118 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.024000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC 
avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit: BPF prog-id=24 op=LOAD Dec 13 14:08:53.031000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcd4eaee8 a2=98 a3=ffffcd4eaed8 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.031000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.031000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit: BPF prog-id=25 op=LOAD Dec 13 14:08:53.031000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcd4eab78 a2=74 a3=95 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.031000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.031000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.031000 audit: BPF prog-id=26 op=LOAD Dec 13 14:08:53.031000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcd4eabd8 a2=94 a3=2 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.031000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.031000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:08:53.196000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.196000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.196000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.196000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.196000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.196000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.196000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.196000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.196000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.196000 audit: BPF prog-id=27 op=LOAD Dec 13 14:08:53.196000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcd4eab98 a2=40 a3=ffffcd4eabc8 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.196000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:08:53.196000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.196000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffcd4eacb0 a2=50 a3=0 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.196000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd4eac08 a2=28 a3=ffffcd4ead38 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd4eac38 a2=28 a3=ffffcd4ead68 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd4eaae8 a2=28 a3=ffffcd4eac18 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd4eac58 a2=28 a3=ffffcd4ead88 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd4eac38 a2=28 a3=ffffcd4ead68 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd4eac28 a2=28 a3=ffffcd4ead58 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd4eac58 a2=28 a3=ffffcd4ead88 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd4eac38 a2=28 a3=ffffcd4ead68 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd4eac58 a2=28 a3=ffffcd4ead88 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcd4eac28 a2=28 a3=ffffcd4ead58 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcd4eaca8 a2=28 a3=ffffcd4eade8 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcd4ea9e0 a2=50 a3=0 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit: BPF prog-id=28 op=LOAD Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcd4ea9e8 a2=94 a3=5 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcd4eaaf0 a2=50 a3=0 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffcd4eac38 a2=4 a3=3 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { confidentiality } for pid=4199 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcd4eac18 a2=94 a3=6 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.207000 audit[4199]: AVC avc: denied { confidentiality } for pid=4199 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:08:53.207000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcd4ea3e8 a2=94 a3=83 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.207000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.208000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.208000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.208000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.208000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.208000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.208000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.208000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.208000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.208000 audit[4199]: AVC avc: denied { perfmon } for pid=4199 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.208000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.208000 audit[4199]: AVC avc: denied { confidentiality } for pid=4199 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:08:53.208000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcd4ea3e8 a2=94 a3=83 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.208000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.208000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.208000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcd4ebe28 a2=10 a3=ffffcd4ebf18 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.208000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.208000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.208000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcd4ebce8 a2=10 a3=ffffcd4ebdd8 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.208000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.208000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.208000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcd4ebc58 a2=10 a3=ffffcd4ebdd8 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.208000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.208000 audit[4199]: AVC avc: denied { bpf } for pid=4199 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:08:53.208000 audit[4199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcd4ebc58 a2=10 a3=ffffcd4ebdd8 items=0 ppid=4118 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.208000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:08:53.219000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:08:53.502000 audit[4243]: NETFILTER_CFG table=mangle:100 family=2 entries=16 op=nft_register_chain pid=4243 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:08:53.502000 audit[4243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe414ea70 a2=0 a3=ffffa5a95fa8 items=0 ppid=4118 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.502000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:08:53.516000 audit[4244]: NETFILTER_CFG table=nat:101 family=2 entries=15 op=nft_register_chain pid=4244 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:08:53.516000 audit[4244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc51ed130 a2=0 a3=ffff86dd2fa8 items=0 ppid=4118 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.516000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:08:53.611000 audit[4242]: NETFILTER_CFG table=raw:102 family=2 entries=21 op=nft_register_chain pid=4242 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:08:53.611000 audit[4242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffef0912e0 a2=0 a3=ffffa53cefa8 items=0 ppid=4118 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.611000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:08:53.641000 audit[4246]: NETFILTER_CFG table=filter:103 family=2 entries=39 op=nft_register_chain pid=4246 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:08:53.641000 audit[4246]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=ffffef7d4f30 a2=0 a3=ffff9f2eefa8 items=0 ppid=4118 
pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:53.641000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:08:54.262670 systemd-networkd[1760]: vxlan.calico: Gained IPv6LL Dec 13 14:08:57.668641 env[1588]: time="2024-12-13T14:08:57.668542615Z" level=info msg="StopPodSandbox for \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\"" Dec 13 14:08:57.750978 env[1588]: 2024-12-13 14:08:57.717 [INFO][4271] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Dec 13 14:08:57.750978 env[1588]: 2024-12-13 14:08:57.717 [INFO][4271] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" iface="eth0" netns="/var/run/netns/cni-1232b553-0f38-de9d-4438-8db5c2688fd0" Dec 13 14:08:57.750978 env[1588]: 2024-12-13 14:08:57.718 [INFO][4271] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" iface="eth0" netns="/var/run/netns/cni-1232b553-0f38-de9d-4438-8db5c2688fd0" Dec 13 14:08:57.750978 env[1588]: 2024-12-13 14:08:57.718 [INFO][4271] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" iface="eth0" netns="/var/run/netns/cni-1232b553-0f38-de9d-4438-8db5c2688fd0" Dec 13 14:08:57.750978 env[1588]: 2024-12-13 14:08:57.718 [INFO][4271] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Dec 13 14:08:57.750978 env[1588]: 2024-12-13 14:08:57.718 [INFO][4271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Dec 13 14:08:57.750978 env[1588]: 2024-12-13 14:08:57.737 [INFO][4277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" HandleID="k8s-pod-network.2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:57.750978 env[1588]: 2024-12-13 14:08:57.737 [INFO][4277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:08:57.750978 env[1588]: 2024-12-13 14:08:57.737 [INFO][4277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:08:57.750978 env[1588]: 2024-12-13 14:08:57.746 [WARNING][4277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" HandleID="k8s-pod-network.2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:57.750978 env[1588]: 2024-12-13 14:08:57.746 [INFO][4277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" HandleID="k8s-pod-network.2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:57.750978 env[1588]: 2024-12-13 14:08:57.747 [INFO][4277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:08:57.750978 env[1588]: 2024-12-13 14:08:57.749 [INFO][4271] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Dec 13 14:08:57.755548 systemd[1]: run-netns-cni\x2d1232b553\x2d0f38\x2dde9d\x2d4438\x2d8db5c2688fd0.mount: Deactivated successfully. Dec 13 14:08:57.756557 env[1588]: time="2024-12-13T14:08:57.756484727Z" level=info msg="TearDown network for sandbox \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\" successfully" Dec 13 14:08:57.756557 env[1588]: time="2024-12-13T14:08:57.756553526Z" level=info msg="StopPodSandbox for \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\" returns successfully" Dec 13 14:08:57.757348 env[1588]: time="2024-12-13T14:08:57.757320467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-559747f56b-6lsgx,Uid:d5db41ac-60b1-4aef-8370-a52f5b42bc29,Namespace:calico-system,Attempt:1,}" Dec 13 14:08:57.919104 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:08:57.919615 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calied9e79fb634: link becomes ready Dec 13 14:08:57.921222 systemd-networkd[1760]: calied9e79fb634: Link UP Dec 13 14:08:57.921380 systemd-networkd[1760]: calied9e79fb634: Gained carrier Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.837 [INFO][4283] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0 calico-kube-controllers-559747f56b- calico-system d5db41ac-60b1-4aef-8370-a52f5b42bc29 818 0 2024-12-13 14:08:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:559747f56b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.6-a-c740448bc5 calico-kube-controllers-559747f56b-6lsgx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calied9e79fb634 [] []}} ContainerID="c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" Namespace="calico-system" Pod="calico-kube-controllers-559747f56b-6lsgx" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-" Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.837 [INFO][4283] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" Namespace="calico-system" Pod="calico-kube-controllers-559747f56b-6lsgx" 
WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.861 [INFO][4294] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" HandleID="k8s-pod-network.c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.872 [INFO][4294] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" HandleID="k8s-pod-network.c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.6-a-c740448bc5", "pod":"calico-kube-controllers-559747f56b-6lsgx", "timestamp":"2024-12-13 14:08:57.861230945 +0000 UTC"}, Hostname:"ci-3510.3.6-a-c740448bc5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.872 [INFO][4294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.873 [INFO][4294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.873 [INFO][4294] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.6-a-c740448bc5' Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.874 [INFO][4294] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.877 [INFO][4294] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.881 [INFO][4294] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.882 [INFO][4294] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.884 [INFO][4294] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.884 [INFO][4294] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.885 [INFO][4294] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7 Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.890 [INFO][4294] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.902 
[INFO][4294] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.129/26] block=192.168.106.128/26 handle="k8s-pod-network.c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.902 [INFO][4294] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.129/26] handle="k8s-pod-network.c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.903 [INFO][4294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:08:57.944776 env[1588]: 2024-12-13 14:08:57.903 [INFO][4294] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.129/26] IPv6=[] ContainerID="c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" HandleID="k8s-pod-network.c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:57.945386 env[1588]: 2024-12-13 14:08:57.904 [INFO][4283] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" Namespace="calico-system" Pod="calico-kube-controllers-559747f56b-6lsgx" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0", GenerateName:"calico-kube-controllers-559747f56b-", Namespace:"calico-system", SelfLink:"", UID:"d5db41ac-60b1-4aef-8370-a52f5b42bc29", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"559747f56b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"", Pod:"calico-kube-controllers-559747f56b-6lsgx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied9e79fb634", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:08:57.945386 env[1588]: 2024-12-13 14:08:57.905 [INFO][4283] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.129/32] ContainerID="c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" Namespace="calico-system" Pod="calico-kube-controllers-559747f56b-6lsgx" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:57.945386 env[1588]: 2024-12-13 14:08:57.905 [INFO][4283] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied9e79fb634 ContainerID="c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" 
Namespace="calico-system" Pod="calico-kube-controllers-559747f56b-6lsgx" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:57.945386 env[1588]: 2024-12-13 14:08:57.925 [INFO][4283] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" Namespace="calico-system" Pod="calico-kube-controllers-559747f56b-6lsgx" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:57.945386 env[1588]: 2024-12-13 14:08:57.927 [INFO][4283] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" Namespace="calico-system" Pod="calico-kube-controllers-559747f56b-6lsgx" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0", GenerateName:"calico-kube-controllers-559747f56b-", Namespace:"calico-system", SelfLink:"", UID:"d5db41ac-60b1-4aef-8370-a52f5b42bc29", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"559747f56b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7", Pod:"calico-kube-controllers-559747f56b-6lsgx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied9e79fb634", MAC:"96:23:0c:11:74:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:08:57.945386 env[1588]: 2024-12-13 14:08:57.940 [INFO][4283] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7" Namespace="calico-system" Pod="calico-kube-controllers-559747f56b-6lsgx" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:57.953000 audit[4314]: NETFILTER_CFG table=filter:104 family=2 entries=34 op=nft_register_chain pid=4314 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:08:57.961199 kernel: kauditd_printk_skb: 476 callbacks suppressed Dec 13 14:08:57.961304 kernel: audit: type=1325 audit(1734098937.953:409): table=filter:104 family=2 entries=34 op=nft_register_chain pid=4314 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:08:57.953000 audit[4314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=ffffc076c110 a2=0 a3=ffffadab5fa8 items=0 
ppid=4118 pid=4314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:57.979338 env[1588]: time="2024-12-13T14:08:57.979158799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:57.979338 env[1588]: time="2024-12-13T14:08:57.979195198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:57.979338 env[1588]: time="2024-12-13T14:08:57.979204998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:57.979672 env[1588]: time="2024-12-13T14:08:57.979611428Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7 pid=4323 runtime=io.containerd.runc.v2 Dec 13 14:08:58.001389 kernel: audit: type=1300 audit(1734098937.953:409): arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=ffffc076c110 a2=0 a3=ffffadab5fa8 items=0 ppid=4118 pid=4314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:57.953000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:08:58.016746 kernel: audit: type=1327 audit(1734098937.953:409): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:08:58.058050 env[1588]: time="2024-12-13T14:08:58.058001344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-559747f56b-6lsgx,Uid:d5db41ac-60b1-4aef-8370-a52f5b42bc29,Namespace:calico-system,Attempt:1,} returns sandbox id \"c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7\"" Dec 13 14:08:58.060635 env[1588]: time="2024-12-13T14:08:58.060039975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 14:08:58.671135 env[1588]: time="2024-12-13T14:08:58.671098772Z" level=info msg="StopPodSandbox for \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\"" Dec 13 14:08:58.721565 env[1588]: time="2024-12-13T14:08:58.721529377Z" level=info msg="StopPodSandbox for \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\"" Dec 13 14:08:58.768261 env[1588]: 2024-12-13 14:08:58.726 [INFO][4377] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Dec 13 14:08:58.768261 env[1588]: 2024-12-13 14:08:58.727 [INFO][4377] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" iface="eth0" netns="/var/run/netns/cni-ab6e4ef4-ae56-3b10-a50c-8f5284fc97c5" Dec 13 14:08:58.768261 env[1588]: 2024-12-13 14:08:58.727 [INFO][4377] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" iface="eth0" netns="/var/run/netns/cni-ab6e4ef4-ae56-3b10-a50c-8f5284fc97c5" Dec 13 14:08:58.768261 env[1588]: 2024-12-13 14:08:58.727 [INFO][4377] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" iface="eth0" netns="/var/run/netns/cni-ab6e4ef4-ae56-3b10-a50c-8f5284fc97c5" Dec 13 14:08:58.768261 env[1588]: 2024-12-13 14:08:58.727 [INFO][4377] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Dec 13 14:08:58.768261 env[1588]: 2024-12-13 14:08:58.727 [INFO][4377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Dec 13 14:08:58.768261 env[1588]: 2024-12-13 14:08:58.749 [INFO][4388] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" HandleID="k8s-pod-network.be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Workload="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:08:58.768261 env[1588]: 2024-12-13 14:08:58.749 [INFO][4388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:08:58.768261 env[1588]: 2024-12-13 14:08:58.749 [INFO][4388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:08:58.768261 env[1588]: 2024-12-13 14:08:58.758 [WARNING][4388] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" HandleID="k8s-pod-network.be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Workload="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:08:58.768261 env[1588]: 2024-12-13 14:08:58.759 [INFO][4388] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" HandleID="k8s-pod-network.be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Workload="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:08:58.768261 env[1588]: 2024-12-13 14:08:58.760 [INFO][4388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:08:58.768261 env[1588]: 2024-12-13 14:08:58.762 [INFO][4377] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Dec 13 14:08:58.771811 systemd[1]: run-netns-cni\x2dab6e4ef4\x2dae56\x2d3b10\x2da50c\x2d8f5284fc97c5.mount: Deactivated successfully. Dec 13 14:08:58.772309 env[1588]: time="2024-12-13T14:08:58.772264935Z" level=info msg="TearDown network for sandbox \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\" successfully" Dec 13 14:08:58.772402 env[1588]: time="2024-12-13T14:08:58.772385532Z" level=info msg="StopPodSandbox for \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\" returns successfully" Dec 13 14:08:58.773444 env[1588]: time="2024-12-13T14:08:58.773412507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jgbk,Uid:e35f131e-6a5b-4f9b-80ee-8f99f7186350,Namespace:calico-system,Attempt:1,}" Dec 13 14:08:58.811952 env[1588]: 2024-12-13 14:08:58.775 [WARNING][4400] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0", GenerateName:"calico-kube-controllers-559747f56b-", Namespace:"calico-system", SelfLink:"", UID:"d5db41ac-60b1-4aef-8370-a52f5b42bc29", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"559747f56b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7", Pod:"calico-kube-controllers-559747f56b-6lsgx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied9e79fb634", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:08:58.811952 env[1588]: 2024-12-13 14:08:58.775 [INFO][4400] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Dec 13 14:08:58.811952 env[1588]: 2024-12-13 14:08:58.775 [INFO][4400] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" iface="eth0" netns="" Dec 13 14:08:58.811952 env[1588]: 2024-12-13 14:08:58.775 [INFO][4400] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Dec 13 14:08:58.811952 env[1588]: 2024-12-13 14:08:58.775 [INFO][4400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Dec 13 14:08:58.811952 env[1588]: 2024-12-13 14:08:58.798 [INFO][4409] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" HandleID="k8s-pod-network.2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:58.811952 env[1588]: 2024-12-13 14:08:58.798 [INFO][4409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:08:58.811952 env[1588]: 2024-12-13 14:08:58.798 [INFO][4409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:08:58.811952 env[1588]: 2024-12-13 14:08:58.807 [WARNING][4409] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" HandleID="k8s-pod-network.2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:58.811952 env[1588]: 2024-12-13 14:08:58.807 [INFO][4409] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" HandleID="k8s-pod-network.2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:58.811952 env[1588]: 2024-12-13 14:08:58.809 [INFO][4409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:08:58.811952 env[1588]: 2024-12-13 14:08:58.810 [INFO][4400] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Dec 13 14:08:58.812488 env[1588]: time="2024-12-13T14:08:58.812456951Z" level=info msg="TearDown network for sandbox \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\" successfully" Dec 13 14:08:58.812551 env[1588]: time="2024-12-13T14:08:58.812537109Z" level=info msg="StopPodSandbox for \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\" returns successfully" Dec 13 14:08:58.813413 env[1588]: time="2024-12-13T14:08:58.813386408Z" level=info msg="RemovePodSandbox for \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\"" Dec 13 14:08:58.813550 env[1588]: time="2024-12-13T14:08:58.813509205Z" level=info msg="Forcibly stopping sandbox \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\"" Dec 13 14:08:58.971724 systemd-networkd[1760]: cali2aecb272a3b: Link UP Dec 13 14:08:58.984994 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:08:58.985172 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2aecb272a3b: link becomes ready Dec 13 14:08:58.986470 systemd-networkd[1760]: cali2aecb272a3b: Gained carrier Dec 13 14:08:58.995395 env[1588]: 2024-12-13 14:08:58.888 [WARNING][4438] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0", GenerateName:"calico-kube-controllers-559747f56b-", Namespace:"calico-system", SelfLink:"", UID:"d5db41ac-60b1-4aef-8370-a52f5b42bc29", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"559747f56b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7", Pod:"calico-kube-controllers-559747f56b-6lsgx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied9e79fb634", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:08:58.995395 env[1588]: 2024-12-13 14:08:58.888 [INFO][4438] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Dec 13 14:08:58.995395 env[1588]: 2024-12-13 14:08:58.888 [INFO][4438] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" iface="eth0" netns="" Dec 13 14:08:58.995395 env[1588]: 2024-12-13 14:08:58.888 [INFO][4438] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Dec 13 14:08:58.995395 env[1588]: 2024-12-13 14:08:58.888 [INFO][4438] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Dec 13 14:08:58.995395 env[1588]: 2024-12-13 14:08:58.926 [INFO][4452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" HandleID="k8s-pod-network.2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:58.995395 env[1588]: 2024-12-13 14:08:58.926 [INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:08:58.995395 env[1588]: 2024-12-13 14:08:58.944 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:08:58.995395 env[1588]: 2024-12-13 14:08:58.985 [WARNING][4452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" HandleID="k8s-pod-network.2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:58.995395 env[1588]: 2024-12-13 14:08:58.985 [INFO][4452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" HandleID="k8s-pod-network.2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--kube--controllers--559747f56b--6lsgx-eth0" Dec 13 14:08:58.995395 env[1588]: 2024-12-13 14:08:58.988 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:08:58.995395 env[1588]: 2024-12-13 14:08:58.994 [INFO][4438] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1" Dec 13 14:08:58.995936 env[1588]: time="2024-12-13T14:08:58.995906659Z" level=info msg="TearDown network for sandbox \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\" successfully" Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.868 [INFO][4425] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0 csi-node-driver- calico-system e35f131e-6a5b-4f9b-80ee-8f99f7186350 824 0 2024-12-13 14:08:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.6-a-c740448bc5 csi-node-driver-9jgbk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2aecb272a3b [] []}} ContainerID="02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" Namespace="calico-system" Pod="csi-node-driver-9jgbk" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-" Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.868 [INFO][4425] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" Namespace="calico-system" Pod="csi-node-driver-9jgbk" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.900 [INFO][4447] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" HandleID="k8s-pod-network.02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" Workload="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.910 [INFO][4447] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" HandleID="k8s-pod-network.02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" Workload="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000310e10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.6-a-c740448bc5", "pod":"csi-node-driver-9jgbk", "timestamp":"2024-12-13 14:08:58.900521115 +0000 UTC"}, 
Hostname:"ci-3510.3.6-a-c740448bc5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.910 [INFO][4447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.910 [INFO][4447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.910 [INFO][4447] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.6-a-c740448bc5' Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.913 [INFO][4447] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.917 [INFO][4447] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.923 [INFO][4447] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.925 [INFO][4447] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.927 [INFO][4447] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.927 [INFO][4447] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.929 [INFO][4447] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56 Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.934 [INFO][4447] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.943 [INFO][4447] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.130/26] block=192.168.106.128/26 handle="k8s-pod-network.02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.943 [INFO][4447] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.130/26] handle="k8s-pod-network.02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.943 [INFO][4447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 14:08:59.003705 env[1588]: 2024-12-13 14:08:58.943 [INFO][4447] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.130/26] IPv6=[] ContainerID="02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" HandleID="k8s-pod-network.02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" Workload="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:08:59.004232 env[1588]: 2024-12-13 14:08:58.956 [INFO][4425] cni-plugin/k8s.go 386: Populated endpoint ContainerID="02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" Namespace="calico-system" Pod="csi-node-driver-9jgbk" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e35f131e-6a5b-4f9b-80ee-8f99f7186350", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"", Pod:"csi-node-driver-9jgbk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2aecb272a3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:08:59.004232 env[1588]: 2024-12-13 14:08:58.956 [INFO][4425] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.130/32] ContainerID="02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" Namespace="calico-system" Pod="csi-node-driver-9jgbk" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:08:59.004232 env[1588]: 2024-12-13 14:08:58.956 [INFO][4425] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2aecb272a3b ContainerID="02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" Namespace="calico-system" Pod="csi-node-driver-9jgbk" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:08:59.004232 env[1588]: 2024-12-13 14:08:58.987 [INFO][4425] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" Namespace="calico-system" Pod="csi-node-driver-9jgbk" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:08:59.004232 env[1588]: 2024-12-13 14:08:58.987 [INFO][4425] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" Namespace="calico-system" 
Pod="csi-node-driver-9jgbk" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e35f131e-6a5b-4f9b-80ee-8f99f7186350", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56", Pod:"csi-node-driver-9jgbk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2aecb272a3b", MAC:"e2:82:2a:fa:71:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:08:59.004232 env[1588]: 2024-12-13 14:08:58.998 [INFO][4425] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56" Namespace="calico-system" Pod="csi-node-driver-9jgbk" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:08:59.008180 env[1588]: time="2024-12-13T14:08:59.008139240Z" level=info msg="RemovePodSandbox \"2d0426d0e89199c42ec6f9b39dc6bcb7ae49684e31828a2073caf07158227bb1\" returns successfully" Dec 13 14:08:59.011000 audit[4471]: NETFILTER_CFG table=filter:105 family=2 entries=34 op=nft_register_chain pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:08:59.026956 env[1588]: time="2024-12-13T14:08:59.024750076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:59.026956 env[1588]: time="2024-12-13T14:08:59.024820395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:59.026956 env[1588]: time="2024-12-13T14:08:59.024833034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:59.026956 env[1588]: time="2024-12-13T14:08:59.025024950Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56 pid=4480 runtime=io.containerd.runc.v2 Dec 13 14:08:59.011000 audit[4471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18640 a0=3 a1=fffffd1850a0 a2=0 a3=ffff9e697fa8 items=0 ppid=4118 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:59.053054 kernel: audit: type=1325 audit(1734098939.011:410): table=filter:105 family=2 entries=34 op=nft_register_chain pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:08:59.053210 kernel: audit: type=1300 audit(1734098939.011:410): arch=c00000b7 syscall=211 success=yes exit=18640 a0=3 a1=fffffd1850a0 a2=0 a3=ffff9e697fa8 items=0 ppid=4118 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:08:59.011000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:08:59.069206 kernel: audit: type=1327 audit(1734098939.011:410): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:08:59.101275 env[1588]: time="2024-12-13T14:08:59.101227936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jgbk,Uid:e35f131e-6a5b-4f9b-80ee-8f99f7186350,Namespace:calico-system,Attempt:1,} returns sandbox id \"02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56\"" Dec 13 14:08:59.670199 env[1588]: time="2024-12-13T14:08:59.670145455Z" level=info msg="StopPodSandbox for \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\"" Dec 13 14:08:59.787318 env[1588]: 2024-12-13 14:08:59.741 [INFO][4529] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Dec 13 14:08:59.787318 env[1588]: 2024-12-13 14:08:59.741 [INFO][4529] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" iface="eth0" netns="/var/run/netns/cni-9392c779-1e81-d720-79cf-605083cafc68" Dec 13 14:08:59.787318 env[1588]: 2024-12-13 14:08:59.741 [INFO][4529] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" iface="eth0" netns="/var/run/netns/cni-9392c779-1e81-d720-79cf-605083cafc68" Dec 13 14:08:59.787318 env[1588]: 2024-12-13 14:08:59.742 [INFO][4529] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" iface="eth0" netns="/var/run/netns/cni-9392c779-1e81-d720-79cf-605083cafc68" Dec 13 14:08:59.787318 env[1588]: 2024-12-13 14:08:59.742 [INFO][4529] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Dec 13 14:08:59.787318 env[1588]: 2024-12-13 14:08:59.742 [INFO][4529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Dec 13 14:08:59.787318 env[1588]: 2024-12-13 14:08:59.776 [INFO][4535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" HandleID="k8s-pod-network.ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:08:59.787318 env[1588]: 2024-12-13 14:08:59.776 [INFO][4535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:08:59.787318 env[1588]: 2024-12-13 14:08:59.776 [INFO][4535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:08:59.787318 env[1588]: 2024-12-13 14:08:59.784 [WARNING][4535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" HandleID="k8s-pod-network.ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:08:59.787318 env[1588]: 2024-12-13 14:08:59.784 [INFO][4535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" HandleID="k8s-pod-network.ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:08:59.787318 env[1588]: 2024-12-13 14:08:59.785 [INFO][4535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:08:59.787318 env[1588]: 2024-12-13 14:08:59.786 [INFO][4529] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Dec 13 14:08:59.789759 systemd[1]: run-netns-cni\x2d9392c779\x2d1e81\x2dd720\x2d79cf\x2d605083cafc68.mount: Deactivated successfully. 
Dec 13 14:08:59.791270 env[1588]: time="2024-12-13T14:08:59.791224389Z" level=info msg="TearDown network for sandbox \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\" successfully" Dec 13 14:08:59.791270 env[1588]: time="2024-12-13T14:08:59.791266868Z" level=info msg="StopPodSandbox for \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\" returns successfully" Dec 13 14:08:59.791976 env[1588]: time="2024-12-13T14:08:59.791949332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777fb8766-cqmhg,Uid:26820ebc-4ace-4cbe-bb2e-a9e912553e07,Namespace:calico-apiserver,Attempt:1,}" Dec 13 14:08:59.894229 systemd-networkd[1760]: calied9e79fb634: Gained IPv6LL Dec 13 14:08:59.977421 env[1588]: time="2024-12-13T14:08:59.977376780Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:59.980882 systemd-networkd[1760]: cali1c7c636c887: Link UP Dec 13 14:08:59.983364 env[1588]: time="2024-12-13T14:08:59.983331115Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:59.987645 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:08:59.987812 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1c7c636c887: link becomes ready Dec 13 14:08:59.989785 env[1588]: time="2024-12-13T14:08:59.989744199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:59.994623 systemd-networkd[1760]: cali1c7c636c887: Gained carrier Dec 13 14:08:59.994912 env[1588]: time="2024-12-13T14:08:59.994885474Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:59.997211 env[1588]: time="2024-12-13T14:08:59.996385238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 14:09:00.012222 env[1588]: time="2024-12-13T14:09:00.012186175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 14:09:00.026000 audit[4565]: NETFILTER_CFG table=filter:106 family=2 entries=48 op=nft_register_chain pid=4565 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.882 [INFO][4542] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0 calico-apiserver-6777fb8766- calico-apiserver 26820ebc-4ace-4cbe-bb2e-a9e912553e07 833 0 2024-12-13 14:08:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6777fb8766 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.6-a-c740448bc5 calico-apiserver-6777fb8766-cqmhg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1c7c636c887 [] []}} 
ContainerID="1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-cqmhg" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-" Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.882 [INFO][4542] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-cqmhg" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.926 [INFO][4553] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" HandleID="k8s-pod-network.1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.936 [INFO][4553] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" HandleID="k8s-pod-network.1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000304b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.6-a-c740448bc5", "pod":"calico-apiserver-6777fb8766-cqmhg", "timestamp":"2024-12-13 14:08:59.925977591 +0000 UTC"}, Hostname:"ci-3510.3.6-a-c740448bc5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.936 [INFO][4553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.936 [INFO][4553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.936 [INFO][4553] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.6-a-c740448bc5' Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.938 [INFO][4553] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.942 [INFO][4553] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.949 [INFO][4553] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.951 [INFO][4553] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.955 [INFO][4553] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.955 [INFO][4553] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.956 [INFO][4553] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51 Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.961 [INFO][4553] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.974 [INFO][4553] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.131/26] block=192.168.106.128/26 handle="k8s-pod-network.1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.974 [INFO][4553] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.131/26] handle="k8s-pod-network.1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.974 [INFO][4553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 14:09:00.029086 env[1588]: 2024-12-13 14:08:59.974 [INFO][4553] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.131/26] IPv6=[] ContainerID="1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" HandleID="k8s-pod-network.1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:09:00.029779 env[1588]: 2024-12-13 14:08:59.976 [INFO][4542] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-cqmhg" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0", GenerateName:"calico-apiserver-6777fb8766-", Namespace:"calico-apiserver", SelfLink:"", UID:"26820ebc-4ace-4cbe-bb2e-a9e912553e07", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6777fb8766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"", Pod:"calico-apiserver-6777fb8766-cqmhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c7c636c887", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:00.029779 env[1588]: 2024-12-13 14:08:59.976 [INFO][4542] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.131/32] ContainerID="1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-cqmhg" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:09:00.029779 env[1588]: 2024-12-13 14:08:59.976 [INFO][4542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c7c636c887 ContainerID="1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-cqmhg" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:09:00.029779 env[1588]: 2024-12-13 14:08:59.995 [INFO][4542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-cqmhg" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:09:00.029779 env[1588]: 2024-12-13 14:08:59.996 [INFO][4542] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-cqmhg" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0", GenerateName:"calico-apiserver-6777fb8766-", Namespace:"calico-apiserver", SelfLink:"", UID:"26820ebc-4ace-4cbe-bb2e-a9e912553e07", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6777fb8766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51", Pod:"calico-apiserver-6777fb8766-cqmhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c7c636c887", MAC:"42:15:01:b0:8b:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:00.029779 env[1588]: 2024-12-13 14:09:00.015 [INFO][4542] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51" Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-cqmhg" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:09:00.044521 env[1588]: time="2024-12-13T14:09:00.044476794Z" level=info msg="CreateContainer within sandbox \"c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 14:09:00.026000 audit[4565]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25868 a0=3 a1=ffffdb06bef0 a2=0 a3=ffffa0be0fa8 items=0 ppid=4118 pid=4565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:00.072532 kernel: audit: type=1325 audit(1734098940.026:411): table=filter:106 family=2 entries=48 op=nft_register_chain pid=4565 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:09:00.072680 kernel: audit: type=1300 audit(1734098940.026:411): arch=c00000b7 syscall=211 success=yes exit=25868 a0=3 a1=ffffdb06bef0 a2=0 a3=ffffa0be0fa8 items=0 ppid=4118 pid=4565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:00.026000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:09:00.088468 kernel: audit: type=1327 audit(1734098940.026:411): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:09:00.109138 env[1588]: time="2024-12-13T14:09:00.109056593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:09:00.109138 env[1588]: time="2024-12-13T14:09:00.109110872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:09:00.109349 env[1588]: time="2024-12-13T14:09:00.109121152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:09:00.110285 env[1588]: time="2024-12-13T14:09:00.109561261Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51 pid=4583 runtime=io.containerd.runc.v2 Dec 13 14:09:00.126125 env[1588]: time="2024-12-13T14:09:00.126078942Z" level=info msg="CreateContainer within sandbox \"c8d8641cf2f445fcb7dc432328bdbdfc13ba9b75db5ccf74187f29026290b4f7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5a62e217dcb2167ca05d154847bc9b1073d7a71776d0e69e1a62b342157abfc6\"" Dec 13 14:09:00.127756 env[1588]: time="2024-12-13T14:09:00.126752325Z" level=info msg="StartContainer for \"5a62e217dcb2167ca05d154847bc9b1073d7a71776d0e69e1a62b342157abfc6\"" Dec 13 14:09:00.167068 env[1588]: time="2024-12-13T14:09:00.167022112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777fb8766-cqmhg,Uid:26820ebc-4ace-4cbe-bb2e-a9e912553e07,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51\"" Dec 13 14:09:00.596844 env[1588]: time="2024-12-13T14:09:00.596790402Z" level=info msg="StartContainer for \"5a62e217dcb2167ca05d154847bc9b1073d7a71776d0e69e1a62b342157abfc6\" returns successfully" Dec 13 14:09:00.670844 env[1588]: time="2024-12-13T14:09:00.670806452Z" level=info msg="StopPodSandbox for \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\"" Dec 13 14:09:00.672247 env[1588]: time="2024-12-13T14:09:00.671135485Z" level=info msg="StopPodSandbox for \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\"" Dec 13 14:09:00.774812 env[1588]: 2024-12-13 14:09:00.724 [INFO][4679] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Dec 13 14:09:00.774812 env[1588]: 2024-12-13 14:09:00.727 [INFO][4679] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" iface="eth0" netns="/var/run/netns/cni-6e5e2506-efa2-7030-5462-477555358a4b" Dec 13 14:09:00.774812 env[1588]: 2024-12-13 14:09:00.727 [INFO][4679] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" iface="eth0" netns="/var/run/netns/cni-6e5e2506-efa2-7030-5462-477555358a4b" Dec 13 14:09:00.774812 env[1588]: 2024-12-13 14:09:00.727 [INFO][4679] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" iface="eth0" netns="/var/run/netns/cni-6e5e2506-efa2-7030-5462-477555358a4b" Dec 13 14:09:00.774812 env[1588]: 2024-12-13 14:09:00.727 [INFO][4679] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Dec 13 14:09:00.774812 env[1588]: 2024-12-13 14:09:00.727 [INFO][4679] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Dec 13 14:09:00.774812 env[1588]: 2024-12-13 14:09:00.762 [INFO][4691] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" HandleID="k8s-pod-network.ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:00.774812 env[1588]: 2024-12-13 14:09:00.762 [INFO][4691] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:00.774812 env[1588]: 2024-12-13 14:09:00.762 [INFO][4691] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:09:00.774812 env[1588]: 2024-12-13 14:09:00.771 [WARNING][4691] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" HandleID="k8s-pod-network.ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:00.774812 env[1588]: 2024-12-13 14:09:00.771 [INFO][4691] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" HandleID="k8s-pod-network.ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:00.774812 env[1588]: 2024-12-13 14:09:00.772 [INFO][4691] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:09:00.774812 env[1588]: 2024-12-13 14:09:00.773 [INFO][4679] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Dec 13 14:09:00.775421 env[1588]: time="2024-12-13T14:09:00.775381004Z" level=info msg="TearDown network for sandbox \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\" successfully" Dec 13 14:09:00.775509 env[1588]: time="2024-12-13T14:09:00.775492282Z" level=info msg="StopPodSandbox for \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\" returns successfully" Dec 13 14:09:00.776340 env[1588]: time="2024-12-13T14:09:00.776306422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777fb8766-ckq89,Uid:4c011aa9-d666-4206-8289-8d5531610d0f,Namespace:calico-apiserver,Attempt:1,}" Dec 13 14:09:00.795579 systemd[1]: run-netns-cni\x2d6e5e2506\x2defa2\x2d7030\x2d5462\x2d477555358a4b.mount: Deactivated successfully. 
Dec 13 14:09:00.806715 env[1588]: 2024-12-13 14:09:00.746 [INFO][4680] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Dec 13 14:09:00.806715 env[1588]: 2024-12-13 14:09:00.746 [INFO][4680] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" iface="eth0" netns="/var/run/netns/cni-6fbc9c95-013d-43ae-ad8b-5e12ac537558" Dec 13 14:09:00.806715 env[1588]: 2024-12-13 14:09:00.746 [INFO][4680] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" iface="eth0" netns="/var/run/netns/cni-6fbc9c95-013d-43ae-ad8b-5e12ac537558" Dec 13 14:09:00.806715 env[1588]: 2024-12-13 14:09:00.746 [INFO][4680] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" iface="eth0" netns="/var/run/netns/cni-6fbc9c95-013d-43ae-ad8b-5e12ac537558" Dec 13 14:09:00.806715 env[1588]: 2024-12-13 14:09:00.746 [INFO][4680] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Dec 13 14:09:00.806715 env[1588]: 2024-12-13 14:09:00.746 [INFO][4680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Dec 13 14:09:00.806715 env[1588]: 2024-12-13 14:09:00.778 [INFO][4695] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" HandleID="k8s-pod-network.53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:00.806715 env[1588]: 2024-12-13 14:09:00.778 [INFO][4695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:00.806715 env[1588]: 2024-12-13 14:09:00.778 [INFO][4695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:09:00.806715 env[1588]: 2024-12-13 14:09:00.795 [WARNING][4695] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" HandleID="k8s-pod-network.53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:00.806715 env[1588]: 2024-12-13 14:09:00.795 [INFO][4695] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" HandleID="k8s-pod-network.53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:00.806715 env[1588]: 2024-12-13 14:09:00.804 [INFO][4695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:09:00.806715 env[1588]: 2024-12-13 14:09:00.805 [INFO][4680] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Dec 13 14:09:00.810722 env[1588]: time="2024-12-13T14:09:00.809935889Z" level=info msg="TearDown network for sandbox \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\" successfully" Dec 13 14:09:00.810722 env[1588]: time="2024-12-13T14:09:00.810542154Z" level=info msg="StopPodSandbox for \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\" returns successfully" Dec 13 14:09:00.809332 systemd[1]: run-netns-cni\x2d6fbc9c95\x2d013d\x2d43ae\x2dad8b\x2d5e12ac537558.mount: Deactivated successfully. Dec 13 14:09:00.811387 env[1588]: time="2024-12-13T14:09:00.811351615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-68bh6,Uid:63d4a6f1-19ab-4534-9a5d-579c3598a6da,Namespace:kube-system,Attempt:1,}" Dec 13 14:09:00.948283 kubelet[2790]: I1213 14:09:00.947503 2790 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-559747f56b-6lsgx" podStartSLOduration=38.005325442 podStartE2EDuration="39.947455404s" podCreationTimestamp="2024-12-13 14:08:21 +0000 UTC" firstStartedPulling="2024-12-13 14:08:58.05941927 +0000 UTC m=+59.528555654" lastFinishedPulling="2024-12-13 14:09:00.001549232 +0000 UTC m=+61.470685616" observedRunningTime="2024-12-13 14:09:00.943617257 +0000 UTC m=+62.412753641" watchObservedRunningTime="2024-12-13 14:09:00.947455404 +0000 UTC m=+62.416591788" Dec 13 14:09:01.034793 systemd-networkd[1760]: cali6cf45bcd586: Link UP Dec 13 14:09:01.048317 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:09:01.054443 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6cf45bcd586: link becomes ready Dec 13 14:09:01.052473 systemd-networkd[1760]: cali6cf45bcd586: Gained carrier Dec 13 14:09:01.054806 systemd-networkd[1760]: cali2aecb272a3b: Gained IPv6LL Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:00.887 [INFO][4703] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0 calico-apiserver-6777fb8766- calico-apiserver 4c011aa9-d666-4206-8289-8d5531610d0f 847 0 2024-12-13 14:08:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6777fb8766 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.6-a-c740448bc5 calico-apiserver-6777fb8766-ckq89 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6cf45bcd586 [] []}} ContainerID="c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-ckq89" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-" Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:00.887 [INFO][4703] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-ckq89" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:00.951 [INFO][4727] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" 
HandleID="k8s-pod-network.c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:00.972 [INFO][4727] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" HandleID="k8s-pod-network.c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400029a7a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.6-a-c740448bc5", "pod":"calico-apiserver-6777fb8766-ckq89", "timestamp":"2024-12-13 14:09:00.951523226 +0000 UTC"}, Hostname:"ci-3510.3.6-a-c740448bc5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:00.972 [INFO][4727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:00.972 [INFO][4727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:00.972 [INFO][4727] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.6-a-c740448bc5' Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:00.974 [INFO][4727] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:00.977 [INFO][4727] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:00.981 [INFO][4727] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:00.982 [INFO][4727] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:00.990 [INFO][4727] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:00.990 [INFO][4727] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:00.997 [INFO][4727] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:01.012 [INFO][4727] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:01.020 [INFO][4727] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.132/26] block=192.168.106.128/26 handle="k8s-pod-network.c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:01.020 [INFO][4727] ipam/ipam.go 847: Auto-assigned 1 out of 1 
IPv4s: [192.168.106.132/26] handle="k8s-pod-network.c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:01.020 [INFO][4727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:09:01.076592 env[1588]: 2024-12-13 14:09:01.020 [INFO][4727] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.132/26] IPv6=[] ContainerID="c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" HandleID="k8s-pod-network.c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:01.077299 env[1588]: 2024-12-13 14:09:01.024 [INFO][4703] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-ckq89" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0", GenerateName:"calico-apiserver-6777fb8766-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c011aa9-d666-4206-8289-8d5531610d0f", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6777fb8766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"", Pod:"calico-apiserver-6777fb8766-ckq89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cf45bcd586", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:01.077299 env[1588]: 2024-12-13 14:09:01.024 [INFO][4703] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.132/32] ContainerID="c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-ckq89" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:01.077299 env[1588]: 2024-12-13 14:09:01.024 [INFO][4703] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cf45bcd586 ContainerID="c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-ckq89" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:01.077299 env[1588]: 2024-12-13 14:09:01.035 [INFO][4703] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" 
Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-ckq89" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:01.077299 env[1588]: 2024-12-13 14:09:01.054 [INFO][4703] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-ckq89" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0", GenerateName:"calico-apiserver-6777fb8766-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c011aa9-d666-4206-8289-8d5531610d0f", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6777fb8766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d", Pod:"calico-apiserver-6777fb8766-ckq89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cf45bcd586", MAC:"0a:bd:f6:6e:08:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:01.077299 env[1588]: 2024-12-13 14:09:01.074 [INFO][4703] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d" Namespace="calico-apiserver" Pod="calico-apiserver-6777fb8766-ckq89" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:01.103000 audit[4777]: NETFILTER_CFG table=filter:107 family=2 entries=42 op=nft_register_chain pid=4777 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:09:01.119974 systemd-networkd[1760]: calic6a4145fb2d: Link UP Dec 13 14:09:01.131637 kernel: audit: type=1325 audit(1734098941.103:412): table=filter:107 family=2 entries=42 op=nft_register_chain pid=4777 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:09:01.131746 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic6a4145fb2d: link becomes ready Dec 13 14:09:01.134139 systemd-networkd[1760]: calic6a4145fb2d: Gained carrier Dec 13 14:09:01.135958 env[1588]: time="2024-12-13T14:09:01.130283163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:09:01.135958 env[1588]: time="2024-12-13T14:09:01.130319162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:09:01.135958 env[1588]: time="2024-12-13T14:09:01.130329762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:09:01.135958 env[1588]: time="2024-12-13T14:09:01.130467399Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d pid=4782 runtime=io.containerd.runc.v2 Dec 13 14:09:01.103000 audit[4777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22704 a0=3 a1=ffffde3725b0 a2=0 a3=ffff8cbd6fa8 items=0 ppid=4118 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:01.103000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:00.933 [INFO][4714] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0 coredns-76f75df574- kube-system 63d4a6f1-19ab-4534-9a5d-579c3598a6da 848 0 2024-12-13 14:08:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.6-a-c740448bc5 coredns-76f75df574-68bh6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic6a4145fb2d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" Namespace="kube-system" Pod="coredns-76f75df574-68bh6" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-" Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:00.933 [INFO][4714] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" Namespace="kube-system" Pod="coredns-76f75df574-68bh6" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:00.995 [INFO][4744] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" HandleID="k8s-pod-network.f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.022 [INFO][4744] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" HandleID="k8s-pod-network.f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a0fd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.6-a-c740448bc5", "pod":"coredns-76f75df574-68bh6", "timestamp":"2024-12-13 14:09:00.9889944 +0000 UTC"}, Hostname:"ci-3510.3.6-a-c740448bc5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.022 [INFO][4744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.022 [INFO][4744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.022 [INFO][4744] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.6-a-c740448bc5' Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.026 [INFO][4744] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.041 [INFO][4744] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.078 [INFO][4744] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.080 [INFO][4744] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.082 [INFO][4744] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.083 [INFO][4744] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.084 [INFO][4744] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45 Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.089 [INFO][4744] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.099 [INFO][4744] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.133/26] block=192.168.106.128/26 handle="k8s-pod-network.f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.100 [INFO][4744] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.133/26] handle="k8s-pod-network.f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.100 [INFO][4744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
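Annotation: the ipam/ipam.go records above show how these pod addresses are chosen. The node ci-3510.3.6-a-c740448bc5 already holds an affinity for the block 192.168.106.128/26, so each new workload address (192.168.106.132 for the apiserver pod earlier, 192.168.106.133 here, and 192.168.106.134 a little further down) is claimed from that block under the host-wide IPAM lock. A small stdlib-only Python check of what that block membership means, purely as a cross-check of the logged values:

```python
import ipaddress

# Affinity block logged for host ci-3510.3.6-a-c740448bc5.
block = ipaddress.ip_network("192.168.106.128/26")

# Pod IPs claimed from it in this section of the log.
pods = [ipaddress.ip_address(a) for a in
        ("192.168.106.132", "192.168.106.133", "192.168.106.134")]

print(block.num_addresses)              # 64 addresses per /26 block
print(all(ip in block for ip in pods))  # True: every claim falls inside the block
```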
Dec 13 14:09:01.140740 env[1588]: 2024-12-13 14:09:01.100 [INFO][4744] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.133/26] IPv6=[] ContainerID="f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" HandleID="k8s-pod-network.f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:01.141306 env[1588]: 2024-12-13 14:09:01.103 [INFO][4714] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" Namespace="kube-system" Pod="coredns-76f75df574-68bh6" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"63d4a6f1-19ab-4534-9a5d-579c3598a6da", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"", Pod:"coredns-76f75df574-68bh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6a4145fb2d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:01.141306 env[1588]: 2024-12-13 14:09:01.103 [INFO][4714] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.133/32] ContainerID="f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" Namespace="kube-system" Pod="coredns-76f75df574-68bh6" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:01.141306 env[1588]: 2024-12-13 14:09:01.103 [INFO][4714] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6a4145fb2d ContainerID="f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" Namespace="kube-system" Pod="coredns-76f75df574-68bh6" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:01.141306 env[1588]: 2024-12-13 14:09:01.120 [INFO][4714] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" Namespace="kube-system" Pod="coredns-76f75df574-68bh6" 
WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:01.141306 env[1588]: 2024-12-13 14:09:01.121 [INFO][4714] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" Namespace="kube-system" Pod="coredns-76f75df574-68bh6" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"63d4a6f1-19ab-4534-9a5d-579c3598a6da", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45", Pod:"coredns-76f75df574-68bh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6a4145fb2d", MAC:"9a:83:b7:ce:eb:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:01.141306 env[1588]: 2024-12-13 14:09:01.135 [INFO][4714] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45" Namespace="kube-system" Pod="coredns-76f75df574-68bh6" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:01.169612 env[1588]: time="2024-12-13T14:09:01.169388584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:09:01.169612 env[1588]: time="2024-12-13T14:09:01.169435783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:09:01.169612 env[1588]: time="2024-12-13T14:09:01.169445982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:09:01.169946 env[1588]: time="2024-12-13T14:09:01.169887652Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45 pid=4821 runtime=io.containerd.runc.v2 Dec 13 14:09:01.211000 audit[4827]: NETFILTER_CFG table=filter:108 family=2 entries=50 op=nft_register_chain pid=4827 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:09:01.211000 audit[4827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23900 a0=3 a1=ffffcb364bd0 a2=0 a3=ffff94b54fa8 items=0 ppid=4118 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:01.211000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:09:01.229912 env[1588]: time="2024-12-13T14:09:01.229866171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-68bh6,Uid:63d4a6f1-19ab-4534-9a5d-579c3598a6da,Namespace:kube-system,Attempt:1,} returns sandbox id \"f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45\"" Dec 13 14:09:01.237400 env[1588]: time="2024-12-13T14:09:01.237357711Z" level=info msg="CreateContainer within sandbox \"f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:09:01.240362 env[1588]: time="2024-12-13T14:09:01.240325599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6777fb8766-ckq89,Uid:4c011aa9-d666-4206-8289-8d5531610d0f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d\"" Dec 13 14:09:01.295785 env[1588]: time="2024-12-13T14:09:01.295738828Z" level=info msg="CreateContainer within sandbox \"f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48883f60ff3d58a7dac4ba7089b8c8fc49ac1b5a588b86aaa5a37c3ef1e16f99\"" Dec 13 14:09:01.297244 env[1588]: time="2024-12-13T14:09:01.296487650Z" level=info msg="StartContainer for \"48883f60ff3d58a7dac4ba7089b8c8fc49ac1b5a588b86aaa5a37c3ef1e16f99\"" Dec 13 14:09:01.349647 env[1588]: time="2024-12-13T14:09:01.349583214Z" level=info msg="StartContainer for \"48883f60ff3d58a7dac4ba7089b8c8fc49ac1b5a588b86aaa5a37c3ef1e16f99\" returns successfully" Dec 13 14:09:01.622188 systemd-networkd[1760]: cali1c7c636c887: Gained IPv6LL Dec 13 14:09:01.648438 env[1588]: time="2024-12-13T14:09:01.648398514Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:01.657039 env[1588]: time="2024-12-13T14:09:01.656962988Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:01.665841 env[1588]: time="2024-12-13T14:09:01.665784496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:01.669365 
env[1588]: time="2024-12-13T14:09:01.669196654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 14:09:01.669548 env[1588]: time="2024-12-13T14:09:01.668753625Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:01.670342 env[1588]: time="2024-12-13T14:09:01.670311628Z" level=info msg="StopPodSandbox for \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\"" Dec 13 14:09:01.672019 env[1588]: time="2024-12-13T14:09:01.671273524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 14:09:01.676888 env[1588]: time="2024-12-13T14:09:01.676855710Z" level=info msg="CreateContainer within sandbox \"02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 14:09:01.713147 env[1588]: time="2024-12-13T14:09:01.713101439Z" level=info msg="CreateContainer within sandbox \"02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d5c705163f5ad7609478e6a0a8568ddfb0d5c62491396b9cd52a52b48de983ac\"" Dec 13 14:09:01.713940 env[1588]: time="2024-12-13T14:09:01.713879781Z" level=info msg="StartContainer for \"d5c705163f5ad7609478e6a0a8568ddfb0d5c62491396b9cd52a52b48de983ac\"" Dec 13 14:09:01.820251 env[1588]: 2024-12-13 14:09:01.747 [INFO][4922] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Dec 13 14:09:01.820251 env[1588]: 2024-12-13 14:09:01.747 [INFO][4922] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" iface="eth0" netns="/var/run/netns/cni-c2bb19ec-36f9-6b72-fb82-db7938d4b50d" Dec 13 14:09:01.820251 env[1588]: 2024-12-13 14:09:01.747 [INFO][4922] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" iface="eth0" netns="/var/run/netns/cni-c2bb19ec-36f9-6b72-fb82-db7938d4b50d" Dec 13 14:09:01.820251 env[1588]: 2024-12-13 14:09:01.747 [INFO][4922] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" iface="eth0" netns="/var/run/netns/cni-c2bb19ec-36f9-6b72-fb82-db7938d4b50d" Dec 13 14:09:01.820251 env[1588]: 2024-12-13 14:09:01.748 [INFO][4922] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Dec 13 14:09:01.820251 env[1588]: 2024-12-13 14:09:01.748 [INFO][4922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Dec 13 14:09:01.820251 env[1588]: 2024-12-13 14:09:01.788 [INFO][4946] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" HandleID="k8s-pod-network.0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:01.820251 env[1588]: 2024-12-13 14:09:01.798 [INFO][4946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:01.820251 env[1588]: 2024-12-13 14:09:01.798 [INFO][4946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:09:01.820251 env[1588]: 2024-12-13 14:09:01.815 [WARNING][4946] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" HandleID="k8s-pod-network.0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:01.820251 env[1588]: 2024-12-13 14:09:01.815 [INFO][4946] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" HandleID="k8s-pod-network.0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:01.820251 env[1588]: 2024-12-13 14:09:01.816 [INFO][4946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:09:01.820251 env[1588]: 2024-12-13 14:09:01.818 [INFO][4922] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Dec 13 14:09:01.826757 env[1588]: time="2024-12-13T14:09:01.825658215Z" level=info msg="TearDown network for sandbox \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\" successfully" Dec 13 14:09:01.826757 env[1588]: time="2024-12-13T14:09:01.825713334Z" level=info msg="StopPodSandbox for \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\" returns successfully" Dec 13 14:09:01.825942 systemd[1]: run-netns-cni\x2dc2bb19ec\x2d36f9\x2d6b72\x2dfb82\x2ddb7938d4b50d.mount: Deactivated successfully. 
Dec 13 14:09:01.830852 env[1588]: time="2024-12-13T14:09:01.830241145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gnqj2,Uid:2200e8b6-ef61-4e96-abba-b05c84f6a27d,Namespace:kube-system,Attempt:1,}" Dec 13 14:09:01.858238 env[1588]: time="2024-12-13T14:09:01.858181033Z" level=info msg="StartContainer for \"d5c705163f5ad7609478e6a0a8568ddfb0d5c62491396b9cd52a52b48de983ac\" returns successfully" Dec 13 14:09:01.977000 audit[4986]: NETFILTER_CFG table=filter:109 family=2 entries=16 op=nft_register_rule pid=4986 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:01.977000 audit[4986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffefd36030 a2=0 a3=1 items=0 ppid=2930 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:01.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:09:01.983000 audit[4986]: NETFILTER_CFG table=nat:110 family=2 entries=14 op=nft_register_rule pid=4986 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:01.983000 audit[4986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffefd36030 a2=0 a3=1 items=0 ppid=2930 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:01.983000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:09:02.111775 systemd-networkd[1760]: calid31d8bfc8e2: Link UP Dec 13 14:09:02.122894 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:09:02.123000 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid31d8bfc8e2: link becomes ready Dec 13 14:09:02.124420 systemd-networkd[1760]: calid31d8bfc8e2: Gained carrier Dec 13 14:09:02.137362 kubelet[2790]: I1213 14:09:02.136782 2790 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-68bh6" podStartSLOduration=49.13670788 podStartE2EDuration="49.13670788s" podCreationTimestamp="2024-12-13 14:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:09:01.969526438 +0000 UTC m=+63.438662942" watchObservedRunningTime="2024-12-13 14:09:02.13670788 +0000 UTC m=+63.605844264" Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:01.934 [INFO][4969] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0 coredns-76f75df574- kube-system 2200e8b6-ef61-4e96-abba-b05c84f6a27d 869 0 2024-12-13 14:08:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.6-a-c740448bc5 coredns-76f75df574-gnqj2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid31d8bfc8e2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" Namespace="kube-system" Pod="coredns-76f75df574-gnqj2" 
WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-" Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:01.934 [INFO][4969] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" Namespace="kube-system" Pod="coredns-76f75df574-gnqj2" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:01.995 [INFO][4981] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" HandleID="k8s-pod-network.6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.019 [INFO][4981] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" HandleID="k8s-pod-network.6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003161c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.6-a-c740448bc5", "pod":"coredns-76f75df574-gnqj2", "timestamp":"2024-12-13 14:09:01.995524173 +0000 UTC"}, Hostname:"ci-3510.3.6-a-c740448bc5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.019 [INFO][4981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.020 [INFO][4981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.020 [INFO][4981] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.6-a-c740448bc5' Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.048 [INFO][4981] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.063 [INFO][4981] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.072 [INFO][4981] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.079 [INFO][4981] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.082 [INFO][4981] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.082 [INFO][4981] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.084 [INFO][4981] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.089 [INFO][4981] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.103 [INFO][4981] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.134/26] block=192.168.106.128/26 handle="k8s-pod-network.6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.104 [INFO][4981] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.134/26] handle="k8s-pod-network.6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" host="ci-3510.3.6-a-c740448bc5" Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.104 [INFO][4981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
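Annotation: two kubelet pod_startup_latency_tracker records appear in this section, one for calico-kube-controllers-559747f56b-6lsgx (podStartSLOduration=38.005325442, podStartE2EDuration=39.947455404s) and one for coredns-76f75df574-68bh6 (both values 49.13670788s, with zero-valued pull timestamps). The figures are consistent with the SLO duration being the end-to-end time minus the image-pull window; the short Python cross-check below reproduces the controllers pod's numbers from the logged timestamps (nanoseconds truncated to microseconds), and is only an arithmetic illustration, not kubelet code.

```python
from datetime import datetime, timezone

UTC = timezone.utc

# Timestamps copied from the kubelet record for calico-kube-controllers.
created       = datetime(2024, 12, 13, 14, 8, 21, 0,      tzinfo=UTC)  # podCreationTimestamp
started_pull  = datetime(2024, 12, 13, 14, 8, 58, 59419,  tzinfo=UTC)  # firstStartedPulling
finished_pull = datetime(2024, 12, 13, 14, 9,  0, 1549,   tzinfo=UTC)  # lastFinishedPulling
observed_run  = datetime(2024, 12, 13, 14, 9,  0, 947455, tzinfo=UTC)  # observedRunningTime

e2e = observed_run - created                      # ~39.947s = podStartE2EDuration
slo = e2e - (finished_pull - started_pull)        # ~38.005s = podStartSLOduration (pull excluded)
print(e2e.total_seconds(), slo.total_seconds())

# For coredns-76f75df574-68bh6 the pull timestamps are the zero value,
# which is why its SLO and E2E durations are reported as identical.
```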
Dec 13 14:09:02.141654 env[1588]: 2024-12-13 14:09:02.104 [INFO][4981] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.134/26] IPv6=[] ContainerID="6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" HandleID="k8s-pod-network.6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:02.142235 env[1588]: 2024-12-13 14:09:02.106 [INFO][4969] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" Namespace="kube-system" Pod="coredns-76f75df574-gnqj2" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2200e8b6-ef61-4e96-abba-b05c84f6a27d", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"", Pod:"coredns-76f75df574-gnqj2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid31d8bfc8e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:02.142235 env[1588]: 2024-12-13 14:09:02.106 [INFO][4969] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.134/32] ContainerID="6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" Namespace="kube-system" Pod="coredns-76f75df574-gnqj2" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:02.142235 env[1588]: 2024-12-13 14:09:02.106 [INFO][4969] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid31d8bfc8e2 ContainerID="6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" Namespace="kube-system" Pod="coredns-76f75df574-gnqj2" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:02.142235 env[1588]: 2024-12-13 14:09:02.125 [INFO][4969] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" Namespace="kube-system" Pod="coredns-76f75df574-gnqj2" 
WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:02.142235 env[1588]: 2024-12-13 14:09:02.125 [INFO][4969] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" Namespace="kube-system" Pod="coredns-76f75df574-gnqj2" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2200e8b6-ef61-4e96-abba-b05c84f6a27d", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea", Pod:"coredns-76f75df574-gnqj2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid31d8bfc8e2", MAC:"9e:b7:f2:bd:49:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:02.142235 env[1588]: 2024-12-13 14:09:02.136 [INFO][4969] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea" Namespace="kube-system" Pod="coredns-76f75df574-gnqj2" WorkloadEndpoint="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:02.150000 audit[5020]: NETFILTER_CFG table=filter:111 family=2 entries=46 op=nft_register_chain pid=5020 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:09:02.150000 audit[5020]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21784 a0=3 a1=fffffc57d710 a2=0 a3=ffff85efbfa8 items=0 ppid=4118 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:02.150000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:09:02.164591 env[1588]: time="2024-12-13T14:09:02.164510696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:09:02.164591 env[1588]: time="2024-12-13T14:09:02.164549975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:09:02.164591 env[1588]: time="2024-12-13T14:09:02.164559935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:09:02.165030 env[1588]: time="2024-12-13T14:09:02.164980005Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea pid=5029 runtime=io.containerd.runc.v2 Dec 13 14:09:02.212340 env[1588]: time="2024-12-13T14:09:02.212301555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gnqj2,Uid:2200e8b6-ef61-4e96-abba-b05c84f6a27d,Namespace:kube-system,Attempt:1,} returns sandbox id \"6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea\"" Dec 13 14:09:02.217566 env[1588]: time="2024-12-13T14:09:02.217506631Z" level=info msg="CreateContainer within sandbox \"6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:09:02.251855 env[1588]: time="2024-12-13T14:09:02.251737853Z" level=info msg="CreateContainer within sandbox \"6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ec9acb26715439069062ed9e95af386eb6f1c794bc94787b1098f0f662735917\"" Dec 13 14:09:02.254569 env[1588]: time="2024-12-13T14:09:02.254535266Z" level=info msg="StartContainer for \"ec9acb26715439069062ed9e95af386eb6f1c794bc94787b1098f0f662735917\"" Dec 13 14:09:02.304662 env[1588]: time="2024-12-13T14:09:02.304580991Z" level=info msg="StartContainer for \"ec9acb26715439069062ed9e95af386eb6f1c794bc94787b1098f0f662735917\" returns successfully" Dec 13 14:09:02.325745 systemd-networkd[1760]: calic6a4145fb2d: Gained IPv6LL Dec 13 14:09:02.645789 systemd-networkd[1760]: cali6cf45bcd586: Gained IPv6LL Dec 13 14:09:02.878000 audit[5099]: NETFILTER_CFG table=filter:112 family=2 entries=13 op=nft_register_rule pid=5099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:02.878000 audit[5099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffe57399e0 a2=0 a3=1 items=0 ppid=2930 pid=5099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:02.878000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:09:02.882000 audit[5099]: NETFILTER_CFG table=nat:113 family=2 entries=35 op=nft_register_chain pid=5099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:02.882000 audit[5099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffe57399e0 a2=0 a3=1 items=0 ppid=2930 pid=5099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:02.882000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 
14:09:02.997118 kubelet[2790]: I1213 14:09:02.997086 2790 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-gnqj2" podStartSLOduration=49.997049012 podStartE2EDuration="49.997049012s" podCreationTimestamp="2024-12-13 14:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:09:02.972241644 +0000 UTC m=+64.441378028" watchObservedRunningTime="2024-12-13 14:09:02.997049012 +0000 UTC m=+64.466185396" Dec 13 14:09:03.013000 audit[5102]: NETFILTER_CFG table=filter:114 family=2 entries=10 op=nft_register_rule pid=5102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:03.019986 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 13 14:09:03.020122 kernel: audit: type=1325 audit(1734098943.013:419): table=filter:114 family=2 entries=10 op=nft_register_rule pid=5102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:03.013000 audit[5102]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffcd535dd0 a2=0 a3=1 items=0 ppid=2930 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:03.058661 kernel: audit: type=1300 audit(1734098943.013:419): arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffcd535dd0 a2=0 a3=1 items=0 ppid=2930 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:03.013000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:09:03.071425 kernel: audit: type=1327 audit(1734098943.013:419): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:09:03.071533 kernel: audit: type=1325 audit(1734098943.033:420): table=nat:115 family=2 entries=44 op=nft_register_rule pid=5102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:03.033000 audit[5102]: NETFILTER_CFG table=nat:115 family=2 entries=44 op=nft_register_rule pid=5102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:03.033000 audit[5102]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffcd535dd0 a2=0 a3=1 items=0 ppid=2930 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:03.109111 kernel: audit: type=1300 audit(1734098943.033:420): arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffcd535dd0 a2=0 a3=1 items=0 ppid=2930 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:03.109236 kernel: audit: type=1327 audit(1734098943.033:420): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:09:03.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 
14:09:04.053759 systemd-networkd[1760]: calid31d8bfc8e2: Gained IPv6LL Dec 13 14:09:04.090000 audit[5110]: NETFILTER_CFG table=filter:116 family=2 entries=10 op=nft_register_rule pid=5110 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:04.090000 audit[5110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffeb33c2e0 a2=0 a3=1 items=0 ppid=2930 pid=5110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:04.130860 kernel: audit: type=1325 audit(1734098944.090:421): table=filter:116 family=2 entries=10 op=nft_register_rule pid=5110 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:04.139390 kernel: audit: type=1300 audit(1734098944.090:421): arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffeb33c2e0 a2=0 a3=1 items=0 ppid=2930 pid=5110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:04.139649 kernel: audit: type=1327 audit(1734098944.090:421): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:09:04.090000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:09:04.147000 audit[5110]: NETFILTER_CFG table=nat:117 family=2 entries=56 op=nft_register_chain pid=5110 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:04.147000 audit[5110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffeb33c2e0 a2=0 a3=1 items=0 ppid=2930 pid=5110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:04.147000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:09:04.171618 kernel: audit: type=1325 audit(1734098944.147:422): table=nat:117 family=2 entries=56 op=nft_register_chain pid=5110 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:04.532540 env[1588]: time="2024-12-13T14:09:04.532494942Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:04.539282 env[1588]: time="2024-12-13T14:09:04.539244143Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:04.543057 env[1588]: time="2024-12-13T14:09:04.542994374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:04.547087 env[1588]: time="2024-12-13T14:09:04.547059518Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:04.547519 env[1588]: 
time="2024-12-13T14:09:04.547490668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 14:09:04.552739 env[1588]: time="2024-12-13T14:09:04.551656450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 14:09:04.555002 env[1588]: time="2024-12-13T14:09:04.554829415Z" level=info msg="CreateContainer within sandbox \"1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:09:04.585851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4095787535.mount: Deactivated successfully. Dec 13 14:09:04.601340 env[1588]: time="2024-12-13T14:09:04.601269638Z" level=info msg="CreateContainer within sandbox \"1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"32150b98cf8666da794d2b746f00e27c36be2a7b28727e7d51e6daa9ba6bb973\"" Dec 13 14:09:04.604297 env[1588]: time="2024-12-13T14:09:04.603234072Z" level=info msg="StartContainer for \"32150b98cf8666da794d2b746f00e27c36be2a7b28727e7d51e6daa9ba6bb973\"" Dec 13 14:09:04.671768 env[1588]: time="2024-12-13T14:09:04.671724095Z" level=info msg="StartContainer for \"32150b98cf8666da794d2b746f00e27c36be2a7b28727e7d51e6daa9ba6bb973\" returns successfully" Dec 13 14:09:04.875339 env[1588]: time="2024-12-13T14:09:04.875232970Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:04.882378 env[1588]: time="2024-12-13T14:09:04.882331282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:04.886122 env[1588]: time="2024-12-13T14:09:04.886088873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:04.890681 env[1588]: time="2024-12-13T14:09:04.890648446Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:04.891278 env[1588]: time="2024-12-13T14:09:04.891247751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 14:09:04.894430 env[1588]: time="2024-12-13T14:09:04.894395957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 14:09:04.894589 env[1588]: time="2024-12-13T14:09:04.894543354Z" level=info msg="CreateContainer within sandbox \"c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:09:04.937168 env[1588]: time="2024-12-13T14:09:04.937124388Z" level=info msg="CreateContainer within sandbox \"c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2ed1b7ca0cf0ad330efcad12137d3b0275f6922b7ac8523b5ddf5003b17c98c4\"" Dec 13 14:09:04.938116 
env[1588]: time="2024-12-13T14:09:04.938088925Z" level=info msg="StartContainer for \"2ed1b7ca0cf0ad330efcad12137d3b0275f6922b7ac8523b5ddf5003b17c98c4\"" Dec 13 14:09:04.992735 kubelet[2790]: I1213 14:09:04.992703 2790 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6777fb8766-cqmhg" podStartSLOduration=40.613383691 podStartE2EDuration="44.992647317s" podCreationTimestamp="2024-12-13 14:08:20 +0000 UTC" firstStartedPulling="2024-12-13 14:09:00.168696231 +0000 UTC m=+61.637832575" lastFinishedPulling="2024-12-13 14:09:04.547959817 +0000 UTC m=+66.017096201" observedRunningTime="2024-12-13 14:09:04.991652461 +0000 UTC m=+66.460788845" watchObservedRunningTime="2024-12-13 14:09:04.992647317 +0000 UTC m=+66.461783701" Dec 13 14:09:05.020000 audit[5183]: NETFILTER_CFG table=filter:118 family=2 entries=10 op=nft_register_rule pid=5183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:05.020000 audit[5183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffff5b79100 a2=0 a3=1 items=0 ppid=2930 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:05.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:09:05.026000 audit[5183]: NETFILTER_CFG table=nat:119 family=2 entries=20 op=nft_register_rule pid=5183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:05.026000 audit[5183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff5b79100 a2=0 a3=1 items=0 ppid=2930 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:05.026000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:09:05.104828 env[1588]: time="2024-12-13T14:09:05.104753804Z" level=info msg="StartContainer for \"2ed1b7ca0cf0ad330efcad12137d3b0275f6922b7ac8523b5ddf5003b17c98c4\" returns successfully" Dec 13 14:09:05.582812 systemd[1]: run-containerd-runc-k8s.io-32150b98cf8666da794d2b746f00e27c36be2a7b28727e7d51e6daa9ba6bb973-runc.R38ZZ2.mount: Deactivated successfully. 
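The pod_startup_latency_tracker lines above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick sanity check on the calico-apiserver-6777fb8766-cqmhg figures, using the monotonic m=+ offsets kubelet prints next to each timestamp; this is plain arithmetic on the logged values, not a reimplementation of kubelet's tracker:

    # Values copied from the kubelet log line for calico-apiserver-6777fb8766-cqmhg.
    first_pull = 61.637832575   # firstStartedPulling, m=+ offset
    last_pull  = 66.017096201   # lastFinishedPulling, m=+ offset
    e2e        = 44.992647317   # podStartE2EDuration (watchObservedRunningTime - podCreationTimestamp)

    slo = e2e - (last_pull - first_pull)   # end-to-end duration minus image-pull time
    print(round(slo, 9))                   # expected: 40.613383691, the logged podStartSLOduration

The same relationship holds for the coredns-76f75df574-gnqj2 line earlier: with no image pull recorded (zero-value pulling timestamps), podStartSLOduration and podStartE2EDuration are both 49.997049012s.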
Dec 13 14:09:05.994045 kubelet[2790]: I1213 14:09:05.994018 2790 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6777fb8766-ckq89" podStartSLOduration=42.344203024 podStartE2EDuration="45.993969884s" podCreationTimestamp="2024-12-13 14:08:20 +0000 UTC" firstStartedPulling="2024-12-13 14:09:01.241862762 +0000 UTC m=+62.710999186" lastFinishedPulling="2024-12-13 14:09:04.891629702 +0000 UTC m=+66.360766046" observedRunningTime="2024-12-13 14:09:05.993463216 +0000 UTC m=+67.462599600" watchObservedRunningTime="2024-12-13 14:09:05.993969884 +0000 UTC m=+67.463106268" Dec 13 14:09:06.029000 audit[5188]: NETFILTER_CFG table=filter:120 family=2 entries=10 op=nft_register_rule pid=5188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:06.029000 audit[5188]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffd0e865f0 a2=0 a3=1 items=0 ppid=2930 pid=5188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:06.029000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:09:06.034000 audit[5188]: NETFILTER_CFG table=nat:121 family=2 entries=20 op=nft_register_rule pid=5188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:06.034000 audit[5188]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd0e865f0 a2=0 a3=1 items=0 ppid=2930 pid=5188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:06.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:09:06.984569 env[1588]: time="2024-12-13T14:09:06.983093822Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:06.988658 env[1588]: time="2024-12-13T14:09:06.988625893Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:06.992788 env[1588]: time="2024-12-13T14:09:06.992748157Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:06.996444 env[1588]: time="2024-12-13T14:09:06.996416911Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:09:06.997050 env[1588]: time="2024-12-13T14:09:06.997020857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 14:09:06.999723 env[1588]: time="2024-12-13T14:09:06.999672955Z" level=info msg="CreateContainer within sandbox \"02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56\" 
for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 14:09:07.027448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3269426322.mount: Deactivated successfully. Dec 13 14:09:07.039817 env[1588]: time="2024-12-13T14:09:07.039777623Z" level=info msg="CreateContainer within sandbox \"02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d6abb4d19dd05cba963bc0625f6d523714df9f710ffa680887cf35b582eec643\"" Dec 13 14:09:07.041861 env[1588]: time="2024-12-13T14:09:07.040791080Z" level=info msg="StartContainer for \"d6abb4d19dd05cba963bc0625f6d523714df9f710ffa680887cf35b582eec643\"" Dec 13 14:09:07.053000 audit[5197]: NETFILTER_CFG table=filter:122 family=2 entries=9 op=nft_register_rule pid=5197 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:07.053000 audit[5197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffc1e19310 a2=0 a3=1 items=0 ppid=2930 pid=5197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:07.053000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:09:07.058000 audit[5197]: NETFILTER_CFG table=nat:123 family=2 entries=31 op=nft_register_chain pid=5197 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:09:07.058000 audit[5197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11236 a0=3 a1=ffffc1e19310 a2=0 a3=1 items=0 ppid=2930 pid=5197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:09:07.058000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:09:07.110330 env[1588]: time="2024-12-13T14:09:07.110286305Z" level=info msg="StartContainer for \"d6abb4d19dd05cba963bc0625f6d523714df9f710ffa680887cf35b582eec643\" returns successfully" Dec 13 14:09:07.912131 kubelet[2790]: I1213 14:09:07.912100 2790 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 14:09:07.912540 kubelet[2790]: I1213 14:09:07.912528 2790 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 14:09:11.131110 systemd[1]: run-containerd-runc-k8s.io-5a62e217dcb2167ca05d154847bc9b1073d7a71776d0e69e1a62b342157abfc6-runc.QRQvMv.mount: Deactivated successfully. Dec 13 14:09:15.547420 systemd[1]: run-containerd-runc-k8s.io-ba62ab3582242843fc2979ed0b06c83b7d6b2b19e21ad25d5011cee87b9e8a42-runc.QlMPue.mount: Deactivated successfully. 
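The NETFILTER_CFG/SYSCALL/PROCTITLE audit records in this stretch all describe iptables-restore invocations (comm="iptables-restor", exe="/usr/sbin/xtables-nft-multi", the nft backend). The PROCTITLE field is the command line hex-encoded, with argv entries separated by NUL bytes; decoding the string that repeats in these records needs only the Python standard library (the hex below is copied verbatim from the log):

    # Decode the hex-encoded PROCTITLE; NUL bytes separate argv entries.
    proctitle_hex = (
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    )
    argv = [a.decode() for a in bytes.fromhex(proctitle_hex).split(b"\x00")]
    print(argv)
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

The decoded argv matches the truncated comm= field: -w 5 and -W 100000 are the xtables lock wait settings, --noflush loads the filter/nat rules recorded in the NETFILTER_CFG entries without flushing the existing tables, and --counters restores rule counters.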
Dec 13 14:09:15.617943 kubelet[2790]: I1213 14:09:15.617539 2790 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-9jgbk" podStartSLOduration=46.72249168 podStartE2EDuration="54.617500581s" podCreationTimestamp="2024-12-13 14:08:21 +0000 UTC" firstStartedPulling="2024-12-13 14:08:59.10271342 +0000 UTC m=+60.571849764" lastFinishedPulling="2024-12-13 14:09:06.997722281 +0000 UTC m=+68.466858665" observedRunningTime="2024-12-13 14:09:07.996779109 +0000 UTC m=+69.465915493" watchObservedRunningTime="2024-12-13 14:09:15.617500581 +0000 UTC m=+77.086636965" Dec 13 14:09:32.018431 systemd[1]: run-containerd-runc-k8s.io-5a62e217dcb2167ca05d154847bc9b1073d7a71776d0e69e1a62b342157abfc6-runc.qJ65ck.mount: Deactivated successfully. Dec 13 14:09:45.546866 systemd[1]: run-containerd-runc-k8s.io-ba62ab3582242843fc2979ed0b06c83b7d6b2b19e21ad25d5011cee87b9e8a42-runc.WWaNPq.mount: Deactivated successfully. Dec 13 14:09:59.011624 env[1588]: time="2024-12-13T14:09:59.011455686Z" level=info msg="StopPodSandbox for \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\"" Dec 13 14:09:59.094376 env[1588]: 2024-12-13 14:09:59.060 [WARNING][5352] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e35f131e-6a5b-4f9b-80ee-8f99f7186350", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56", Pod:"csi-node-driver-9jgbk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2aecb272a3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:59.094376 env[1588]: 2024-12-13 14:09:59.060 [INFO][5352] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Dec 13 14:09:59.094376 env[1588]: 2024-12-13 14:09:59.060 [INFO][5352] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" iface="eth0" netns="" Dec 13 14:09:59.094376 env[1588]: 2024-12-13 14:09:59.060 [INFO][5352] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Dec 13 14:09:59.094376 env[1588]: 2024-12-13 14:09:59.060 [INFO][5352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Dec 13 14:09:59.094376 env[1588]: 2024-12-13 14:09:59.080 [INFO][5358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" HandleID="k8s-pod-network.be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Workload="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:09:59.094376 env[1588]: 2024-12-13 14:09:59.081 [INFO][5358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:59.094376 env[1588]: 2024-12-13 14:09:59.081 [INFO][5358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:09:59.094376 env[1588]: 2024-12-13 14:09:59.090 [WARNING][5358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" HandleID="k8s-pod-network.be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Workload="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:09:59.094376 env[1588]: 2024-12-13 14:09:59.090 [INFO][5358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" HandleID="k8s-pod-network.be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Workload="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:09:59.094376 env[1588]: 2024-12-13 14:09:59.091 [INFO][5358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:09:59.094376 env[1588]: 2024-12-13 14:09:59.093 [INFO][5352] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Dec 13 14:09:59.094915 env[1588]: time="2024-12-13T14:09:59.094884082Z" level=info msg="TearDown network for sandbox \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\" successfully" Dec 13 14:09:59.094983 env[1588]: time="2024-12-13T14:09:59.094967529Z" level=info msg="StopPodSandbox for \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\" returns successfully" Dec 13 14:09:59.095526 env[1588]: time="2024-12-13T14:09:59.095485846Z" level=info msg="RemovePodSandbox for \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\"" Dec 13 14:09:59.095639 env[1588]: time="2024-12-13T14:09:59.095531569Z" level=info msg="Forcibly stopping sandbox \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\"" Dec 13 14:09:59.180370 env[1588]: 2024-12-13 14:09:59.132 [WARNING][5376] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e35f131e-6a5b-4f9b-80ee-8f99f7186350", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"02e869268b2ffd945cc075802eee2df1dec85d757d151faf1bc0061f31b5de56", Pod:"csi-node-driver-9jgbk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2aecb272a3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:59.180370 env[1588]: 2024-12-13 14:09:59.132 [INFO][5376] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Dec 13 14:09:59.180370 env[1588]: 2024-12-13 14:09:59.132 [INFO][5376] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" iface="eth0" netns="" Dec 13 14:09:59.180370 env[1588]: 2024-12-13 14:09:59.132 [INFO][5376] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Dec 13 14:09:59.180370 env[1588]: 2024-12-13 14:09:59.132 [INFO][5376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Dec 13 14:09:59.180370 env[1588]: 2024-12-13 14:09:59.165 [INFO][5382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" HandleID="k8s-pod-network.be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Workload="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:09:59.180370 env[1588]: 2024-12-13 14:09:59.165 [INFO][5382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:59.180370 env[1588]: 2024-12-13 14:09:59.166 [INFO][5382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:09:59.180370 env[1588]: 2024-12-13 14:09:59.176 [WARNING][5382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" HandleID="k8s-pod-network.be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Workload="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:09:59.180370 env[1588]: 2024-12-13 14:09:59.176 [INFO][5382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" HandleID="k8s-pod-network.be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Workload="ci--3510.3.6--a--c740448bc5-k8s-csi--node--driver--9jgbk-eth0" Dec 13 14:09:59.180370 env[1588]: 2024-12-13 14:09:59.178 [INFO][5382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:09:59.180370 env[1588]: 2024-12-13 14:09:59.179 [INFO][5376] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531" Dec 13 14:09:59.180968 env[1588]: time="2024-12-13T14:09:59.180934789Z" level=info msg="TearDown network for sandbox \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\" successfully" Dec 13 14:09:59.189891 env[1588]: time="2024-12-13T14:09:59.189855234Z" level=info msg="RemovePodSandbox \"be27ff7d8361c3ed4767c7a93b464e756a9038f15baf0babed2bbd5ddfc2d531\" returns successfully" Dec 13 14:09:59.190530 env[1588]: time="2024-12-13T14:09:59.190493840Z" level=info msg="StopPodSandbox for \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\"" Dec 13 14:09:59.255934 env[1588]: 2024-12-13 14:09:59.225 [WARNING][5400] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"63d4a6f1-19ab-4534-9a5d-579c3598a6da", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45", Pod:"coredns-76f75df574-68bh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6a4145fb2d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:59.255934 env[1588]: 2024-12-13 14:09:59.226 [INFO][5400] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Dec 13 14:09:59.255934 env[1588]: 2024-12-13 14:09:59.226 [INFO][5400] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" iface="eth0" netns="" Dec 13 14:09:59.255934 env[1588]: 2024-12-13 14:09:59.226 [INFO][5400] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Dec 13 14:09:59.255934 env[1588]: 2024-12-13 14:09:59.226 [INFO][5400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Dec 13 14:09:59.255934 env[1588]: 2024-12-13 14:09:59.243 [INFO][5406] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" HandleID="k8s-pod-network.53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:59.255934 env[1588]: 2024-12-13 14:09:59.244 [INFO][5406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:59.255934 env[1588]: 2024-12-13 14:09:59.244 [INFO][5406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:09:59.255934 env[1588]: 2024-12-13 14:09:59.252 [WARNING][5406] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" HandleID="k8s-pod-network.53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:59.255934 env[1588]: 2024-12-13 14:09:59.252 [INFO][5406] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" HandleID="k8s-pod-network.53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:59.255934 env[1588]: 2024-12-13 14:09:59.253 [INFO][5406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:09:59.255934 env[1588]: 2024-12-13 14:09:59.254 [INFO][5400] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Dec 13 14:09:59.256406 env[1588]: time="2024-12-13T14:09:59.255976178Z" level=info msg="TearDown network for sandbox \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\" successfully" Dec 13 14:09:59.256406 env[1588]: time="2024-12-13T14:09:59.256009101Z" level=info msg="StopPodSandbox for \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\" returns successfully" Dec 13 14:09:59.256940 env[1588]: time="2024-12-13T14:09:59.256908806Z" level=info msg="RemovePodSandbox for \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\"" Dec 13 14:09:59.257012 env[1588]: time="2024-12-13T14:09:59.256952209Z" level=info msg="Forcibly stopping sandbox \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\"" Dec 13 14:09:59.321683 env[1588]: 2024-12-13 14:09:59.291 [WARNING][5424] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"63d4a6f1-19ab-4534-9a5d-579c3598a6da", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"f784df9b9ef53ee386ea2b2f265fa25381ea37325ccad5ff322f7edb0e424c45", Pod:"coredns-76f75df574-68bh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6a4145fb2d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:59.321683 env[1588]: 2024-12-13 14:09:59.291 [INFO][5424] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Dec 13 14:09:59.321683 env[1588]: 2024-12-13 14:09:59.291 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" iface="eth0" netns="" Dec 13 14:09:59.321683 env[1588]: 2024-12-13 14:09:59.291 [INFO][5424] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Dec 13 14:09:59.321683 env[1588]: 2024-12-13 14:09:59.291 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Dec 13 14:09:59.321683 env[1588]: 2024-12-13 14:09:59.308 [INFO][5430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" HandleID="k8s-pod-network.53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:59.321683 env[1588]: 2024-12-13 14:09:59.309 [INFO][5430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:59.321683 env[1588]: 2024-12-13 14:09:59.309 [INFO][5430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:09:59.321683 env[1588]: 2024-12-13 14:09:59.317 [WARNING][5430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" HandleID="k8s-pod-network.53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:59.321683 env[1588]: 2024-12-13 14:09:59.317 [INFO][5430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" HandleID="k8s-pod-network.53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--68bh6-eth0" Dec 13 14:09:59.321683 env[1588]: 2024-12-13 14:09:59.318 [INFO][5430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:09:59.321683 env[1588]: 2024-12-13 14:09:59.320 [INFO][5424] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a" Dec 13 14:09:59.322159 env[1588]: time="2024-12-13T14:09:59.322126605Z" level=info msg="TearDown network for sandbox \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\" successfully" Dec 13 14:09:59.329919 env[1588]: time="2024-12-13T14:09:59.329888766Z" level=info msg="RemovePodSandbox \"53ba5b023d73c8a7dc5644b4773762efe847e69e6049fbcd15cffdcc722c3c4a\" returns successfully" Dec 13 14:09:59.330515 env[1588]: time="2024-12-13T14:09:59.330494250Z" level=info msg="StopPodSandbox for \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\"" Dec 13 14:09:59.406821 env[1588]: 2024-12-13 14:09:59.376 [WARNING][5450] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0", GenerateName:"calico-apiserver-6777fb8766-", Namespace:"calico-apiserver", SelfLink:"", UID:"26820ebc-4ace-4cbe-bb2e-a9e912553e07", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6777fb8766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51", Pod:"calico-apiserver-6777fb8766-cqmhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c7c636c887", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:59.406821 env[1588]: 2024-12-13 14:09:59.377 [INFO][5450] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Dec 13 14:09:59.406821 env[1588]: 2024-12-13 14:09:59.377 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" iface="eth0" netns="" Dec 13 14:09:59.406821 env[1588]: 2024-12-13 14:09:59.377 [INFO][5450] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Dec 13 14:09:59.406821 env[1588]: 2024-12-13 14:09:59.377 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Dec 13 14:09:59.406821 env[1588]: 2024-12-13 14:09:59.395 [INFO][5456] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" HandleID="k8s-pod-network.ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:09:59.406821 env[1588]: 2024-12-13 14:09:59.395 [INFO][5456] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:59.406821 env[1588]: 2024-12-13 14:09:59.395 [INFO][5456] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:09:59.406821 env[1588]: 2024-12-13 14:09:59.403 [WARNING][5456] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" HandleID="k8s-pod-network.ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:09:59.406821 env[1588]: 2024-12-13 14:09:59.403 [INFO][5456] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" HandleID="k8s-pod-network.ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:09:59.406821 env[1588]: 2024-12-13 14:09:59.404 [INFO][5456] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:09:59.406821 env[1588]: 2024-12-13 14:09:59.405 [INFO][5450] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Dec 13 14:09:59.407372 env[1588]: time="2024-12-13T14:09:59.407330610Z" level=info msg="TearDown network for sandbox \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\" successfully" Dec 13 14:09:59.407446 env[1588]: time="2024-12-13T14:09:59.407430937Z" level=info msg="StopPodSandbox for \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\" returns successfully" Dec 13 14:09:59.408050 env[1588]: time="2024-12-13T14:09:59.408027460Z" level=info msg="RemovePodSandbox for \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\"" Dec 13 14:09:59.408312 env[1588]: time="2024-12-13T14:09:59.408263517Z" level=info msg="Forcibly stopping sandbox \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\"" Dec 13 14:09:59.482506 env[1588]: 2024-12-13 14:09:59.442 [WARNING][5474] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0", GenerateName:"calico-apiserver-6777fb8766-", Namespace:"calico-apiserver", SelfLink:"", UID:"26820ebc-4ace-4cbe-bb2e-a9e912553e07", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6777fb8766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"1e04b641ab40b87fb6d99b7e7ba210791490cc8b19a2289f449558b856691a51", Pod:"calico-apiserver-6777fb8766-cqmhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c7c636c887", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:59.482506 env[1588]: 2024-12-13 14:09:59.442 [INFO][5474] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Dec 13 14:09:59.482506 env[1588]: 2024-12-13 14:09:59.442 [INFO][5474] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" iface="eth0" netns="" Dec 13 14:09:59.482506 env[1588]: 2024-12-13 14:09:59.442 [INFO][5474] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Dec 13 14:09:59.482506 env[1588]: 2024-12-13 14:09:59.442 [INFO][5474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Dec 13 14:09:59.482506 env[1588]: 2024-12-13 14:09:59.462 [INFO][5480] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" HandleID="k8s-pod-network.ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:09:59.482506 env[1588]: 2024-12-13 14:09:59.462 [INFO][5480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:59.482506 env[1588]: 2024-12-13 14:09:59.462 [INFO][5480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:09:59.482506 env[1588]: 2024-12-13 14:09:59.478 [WARNING][5480] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" HandleID="k8s-pod-network.ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:09:59.482506 env[1588]: 2024-12-13 14:09:59.478 [INFO][5480] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" HandleID="k8s-pod-network.ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--cqmhg-eth0" Dec 13 14:09:59.482506 env[1588]: 2024-12-13 14:09:59.479 [INFO][5480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:09:59.482506 env[1588]: 2024-12-13 14:09:59.481 [INFO][5474] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0" Dec 13 14:09:59.483059 env[1588]: time="2024-12-13T14:09:59.483017086Z" level=info msg="TearDown network for sandbox \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\" successfully" Dec 13 14:09:59.492198 env[1588]: time="2024-12-13T14:09:59.492157627Z" level=info msg="RemovePodSandbox \"ccc835fdd224c1c964307aa643bb0b1b9c420bb3ceb00dd5ba1dfc98f68a73b0\" returns successfully" Dec 13 14:09:59.492835 env[1588]: time="2024-12-13T14:09:59.492809954Z" level=info msg="StopPodSandbox for \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\"" Dec 13 14:09:59.560963 env[1588]: 2024-12-13 14:09:59.528 [WARNING][5498] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2200e8b6-ef61-4e96-abba-b05c84f6a27d", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea", Pod:"coredns-76f75df574-gnqj2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid31d8bfc8e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:59.560963 env[1588]: 2024-12-13 14:09:59.528 [INFO][5498] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Dec 13 14:09:59.560963 env[1588]: 2024-12-13 14:09:59.528 [INFO][5498] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" iface="eth0" netns="" Dec 13 14:09:59.560963 env[1588]: 2024-12-13 14:09:59.528 [INFO][5498] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Dec 13 14:09:59.560963 env[1588]: 2024-12-13 14:09:59.528 [INFO][5498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Dec 13 14:09:59.560963 env[1588]: 2024-12-13 14:09:59.547 [INFO][5505] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" HandleID="k8s-pod-network.0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:59.560963 env[1588]: 2024-12-13 14:09:59.548 [INFO][5505] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:59.560963 env[1588]: 2024-12-13 14:09:59.548 [INFO][5505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:09:59.560963 env[1588]: 2024-12-13 14:09:59.557 [WARNING][5505] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" HandleID="k8s-pod-network.0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:59.560963 env[1588]: 2024-12-13 14:09:59.557 [INFO][5505] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" HandleID="k8s-pod-network.0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:59.560963 env[1588]: 2024-12-13 14:09:59.558 [INFO][5505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:09:59.560963 env[1588]: 2024-12-13 14:09:59.559 [INFO][5498] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Dec 13 14:09:59.561548 env[1588]: time="2024-12-13T14:09:59.561504885Z" level=info msg="TearDown network for sandbox \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\" successfully" Dec 13 14:09:59.561638 env[1588]: time="2024-12-13T14:09:59.561621253Z" level=info msg="StopPodSandbox for \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\" returns successfully" Dec 13 14:09:59.562242 env[1588]: time="2024-12-13T14:09:59.562218176Z" level=info msg="RemovePodSandbox for \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\"" Dec 13 14:09:59.562468 env[1588]: time="2024-12-13T14:09:59.562430952Z" level=info msg="Forcibly stopping sandbox \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\"" Dec 13 14:09:59.632107 env[1588]: 2024-12-13 14:09:59.596 [WARNING][5523] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2200e8b6-ef61-4e96-abba-b05c84f6a27d", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"6f6cee0204f428d5ffd77e2c368e8882eaaac656d20a149aa686f74862be3aea", Pod:"coredns-76f75df574-gnqj2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid31d8bfc8e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:59.632107 env[1588]: 2024-12-13 14:09:59.596 [INFO][5523] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Dec 13 14:09:59.632107 env[1588]: 2024-12-13 14:09:59.596 [INFO][5523] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" iface="eth0" netns="" Dec 13 14:09:59.632107 env[1588]: 2024-12-13 14:09:59.596 [INFO][5523] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Dec 13 14:09:59.632107 env[1588]: 2024-12-13 14:09:59.596 [INFO][5523] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Dec 13 14:09:59.632107 env[1588]: 2024-12-13 14:09:59.615 [INFO][5529] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" HandleID="k8s-pod-network.0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:59.632107 env[1588]: 2024-12-13 14:09:59.615 [INFO][5529] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:59.632107 env[1588]: 2024-12-13 14:09:59.615 [INFO][5529] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:09:59.632107 env[1588]: 2024-12-13 14:09:59.626 [WARNING][5529] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" HandleID="k8s-pod-network.0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:59.632107 env[1588]: 2024-12-13 14:09:59.626 [INFO][5529] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" HandleID="k8s-pod-network.0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Workload="ci--3510.3.6--a--c740448bc5-k8s-coredns--76f75df574--gnqj2-eth0" Dec 13 14:09:59.632107 env[1588]: 2024-12-13 14:09:59.629 [INFO][5529] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:09:59.632107 env[1588]: 2024-12-13 14:09:59.631 [INFO][5523] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a" Dec 13 14:09:59.632620 env[1588]: time="2024-12-13T14:09:59.632565226Z" level=info msg="TearDown network for sandbox \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\" successfully" Dec 13 14:09:59.640057 env[1588]: time="2024-12-13T14:09:59.640024686Z" level=info msg="RemovePodSandbox \"0271004e942902d6c6785a9317f0cfc73836c57f74b9cf571f0d414fa455f23a\" returns successfully" Dec 13 14:09:59.640752 env[1588]: time="2024-12-13T14:09:59.640717656Z" level=info msg="StopPodSandbox for \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\"" Dec 13 14:09:59.708795 env[1588]: 2024-12-13 14:09:59.677 [WARNING][5548] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0", GenerateName:"calico-apiserver-6777fb8766-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c011aa9-d666-4206-8289-8d5531610d0f", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6777fb8766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d", Pod:"calico-apiserver-6777fb8766-ckq89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cf45bcd586", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:59.708795 env[1588]: 2024-12-13 14:09:59.677 [INFO][5548] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Dec 13 14:09:59.708795 env[1588]: 2024-12-13 14:09:59.677 [INFO][5548] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" iface="eth0" netns="" Dec 13 14:09:59.708795 env[1588]: 2024-12-13 14:09:59.677 [INFO][5548] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Dec 13 14:09:59.708795 env[1588]: 2024-12-13 14:09:59.677 [INFO][5548] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Dec 13 14:09:59.708795 env[1588]: 2024-12-13 14:09:59.697 [INFO][5555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" HandleID="k8s-pod-network.ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:59.708795 env[1588]: 2024-12-13 14:09:59.697 [INFO][5555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:59.708795 env[1588]: 2024-12-13 14:09:59.697 [INFO][5555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:09:59.708795 env[1588]: 2024-12-13 14:09:59.705 [WARNING][5555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" HandleID="k8s-pod-network.ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:59.708795 env[1588]: 2024-12-13 14:09:59.705 [INFO][5555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" HandleID="k8s-pod-network.ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:59.708795 env[1588]: 2024-12-13 14:09:59.706 [INFO][5555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:09:59.708795 env[1588]: 2024-12-13 14:09:59.707 [INFO][5548] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Dec 13 14:09:59.709324 env[1588]: time="2024-12-13T14:09:59.709281857Z" level=info msg="TearDown network for sandbox \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\" successfully" Dec 13 14:09:59.709396 env[1588]: time="2024-12-13T14:09:59.709380664Z" level=info msg="StopPodSandbox for \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\" returns successfully" Dec 13 14:09:59.709963 env[1588]: time="2024-12-13T14:09:59.709941545Z" level=info msg="RemovePodSandbox for \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\"" Dec 13 14:09:59.710111 env[1588]: time="2024-12-13T14:09:59.710071874Z" level=info msg="Forcibly stopping sandbox \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\"" Dec 13 14:09:59.779085 env[1588]: 2024-12-13 14:09:59.744 [WARNING][5573] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0", GenerateName:"calico-apiserver-6777fb8766-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c011aa9-d666-4206-8289-8d5531610d0f", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6777fb8766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-c740448bc5", ContainerID:"c29ab4bf366cba717ff14b2f78fcb841eef3a3efeaf3c0ed8f38e89b133c7d7d", Pod:"calico-apiserver-6777fb8766-ckq89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cf45bcd586", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:09:59.779085 env[1588]: 2024-12-13 14:09:59.744 [INFO][5573] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Dec 13 14:09:59.779085 env[1588]: 2024-12-13 14:09:59.744 [INFO][5573] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" iface="eth0" netns="" Dec 13 14:09:59.779085 env[1588]: 2024-12-13 14:09:59.744 [INFO][5573] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Dec 13 14:09:59.779085 env[1588]: 2024-12-13 14:09:59.744 [INFO][5573] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Dec 13 14:09:59.779085 env[1588]: 2024-12-13 14:09:59.763 [INFO][5580] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" HandleID="k8s-pod-network.ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:59.779085 env[1588]: 2024-12-13 14:09:59.763 [INFO][5580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:09:59.779085 env[1588]: 2024-12-13 14:09:59.763 [INFO][5580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:09:59.779085 env[1588]: 2024-12-13 14:09:59.775 [WARNING][5580] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" HandleID="k8s-pod-network.ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:59.779085 env[1588]: 2024-12-13 14:09:59.775 [INFO][5580] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" HandleID="k8s-pod-network.ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Workload="ci--3510.3.6--a--c740448bc5-k8s-calico--apiserver--6777fb8766--ckq89-eth0" Dec 13 14:09:59.779085 env[1588]: 2024-12-13 14:09:59.776 [INFO][5580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:09:59.779085 env[1588]: 2024-12-13 14:09:59.777 [INFO][5573] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d" Dec 13 14:09:59.779617 env[1588]: time="2024-12-13T14:09:59.779564423Z" level=info msg="TearDown network for sandbox \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\" successfully" Dec 13 14:09:59.787267 env[1588]: time="2024-12-13T14:09:59.787232697Z" level=info msg="RemovePodSandbox \"ac55a2c445dfad816738d1328da21773e7bfae4ece68cd138729e02399fe586d\" returns successfully" Dec 13 14:10:02.021488 systemd[1]: run-containerd-runc-k8s.io-5a62e217dcb2167ca05d154847bc9b1073d7a71776d0e69e1a62b342157abfc6-runc.JzXK4u.mount: Deactivated successfully. Dec 13 14:10:11.133550 systemd[1]: run-containerd-runc-k8s.io-5a62e217dcb2167ca05d154847bc9b1073d7a71776d0e69e1a62b342157abfc6-runc.H45zvP.mount: Deactivated successfully. Dec 13 14:10:15.547345 systemd[1]: run-containerd-runc-k8s.io-ba62ab3582242843fc2979ed0b06c83b7d6b2b19e21ad25d5011cee87b9e8a42-runc.UGBCJ0.mount: Deactivated successfully. Dec 13 14:10:30.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.36:22-10.200.16.10:50588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:30.194221 systemd[1]: Started sshd@7-10.200.20.36:22-10.200.16.10:50588.service. Dec 13 14:10:30.199632 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 13 14:10:30.199901 kernel: audit: type=1130 audit(1734099030.193:429): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.36:22-10.200.16.10:50588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:30.635000 audit[5682]: USER_ACCT pid=5682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:30.643145 sshd[5682]: Accepted publickey for core from 10.200.16.10 port 50588 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:30.645307 sshd[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:30.641000 audit[5682]: CRED_ACQ pid=5682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:30.665156 systemd[1]: Started session-10.scope. 
Dec 13 14:10:30.665670 systemd-logind[1575]: New session 10 of user core. Dec 13 14:10:30.682091 kernel: audit: type=1101 audit(1734099030.635:430): pid=5682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:30.684646 kernel: audit: type=1103 audit(1734099030.641:431): pid=5682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:30.684749 kernel: audit: type=1006 audit(1734099030.641:432): pid=5682 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 13 14:10:30.641000 audit[5682]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee983d50 a2=3 a3=1 items=0 ppid=1 pid=5682 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:30.719805 kernel: audit: type=1300 audit(1734099030.641:432): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee983d50 a2=3 a3=1 items=0 ppid=1 pid=5682 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:30.641000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:10:30.727876 kernel: audit: type=1327 audit(1734099030.641:432): proctitle=737368643A20636F7265205B707269765D Dec 13 14:10:30.682000 audit[5682]: USER_START pid=5682 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:30.754215 kernel: audit: type=1105 audit(1734099030.682:433): pid=5682 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:30.693000 audit[5685]: CRED_ACQ pid=5685 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:30.775658 kernel: audit: type=1103 audit(1734099030.693:434): pid=5685 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:31.038717 sshd[5682]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:31.039000 audit[5682]: USER_END pid=5682 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:31.042966 systemd-logind[1575]: Session 10 logged out. 
Waiting for processes to exit. Dec 13 14:10:31.044607 systemd[1]: sshd@7-10.200.20.36:22-10.200.16.10:50588.service: Deactivated successfully. Dec 13 14:10:31.045580 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:10:31.047212 systemd-logind[1575]: Removed session 10. Dec 13 14:10:31.039000 audit[5682]: CRED_DISP pid=5682 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:31.092461 kernel: audit: type=1106 audit(1734099031.039:435): pid=5682 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:31.092611 kernel: audit: type=1104 audit(1734099031.039:436): pid=5682 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:31.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.36:22-10.200.16.10:50588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:36.109111 systemd[1]: Started sshd@8-10.200.20.36:22-10.200.16.10:50604.service. Dec 13 14:10:36.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.36:22-10.200.16.10:50604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:36.114283 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:10:36.114390 kernel: audit: type=1130 audit(1734099036.107:438): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.36:22-10.200.16.10:50604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:10:36.539000 audit[5715]: USER_ACCT pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:36.540905 sshd[5715]: Accepted publickey for core from 10.200.16.10 port 50604 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:36.565630 kernel: audit: type=1101 audit(1734099036.539:439): pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:36.564000 audit[5715]: CRED_ACQ pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:36.569829 sshd[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:36.601097 kernel: audit: type=1103 audit(1734099036.564:440): pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:36.601390 kernel: audit: type=1006 audit(1734099036.564:441): pid=5715 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 13 14:10:36.564000 audit[5715]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd92d4c70 a2=3 a3=1 items=0 ppid=1 pid=5715 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:36.624364 kernel: audit: type=1300 audit(1734099036.564:441): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd92d4c70 a2=3 a3=1 items=0 ppid=1 pid=5715 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:36.564000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:10:36.632492 kernel: audit: type=1327 audit(1734099036.564:441): proctitle=737368643A20636F7265205B707269765D Dec 13 14:10:36.635935 systemd[1]: Started session-11.scope. Dec 13 14:10:36.636150 systemd-logind[1575]: New session 11 of user core. 
Dec 13 14:10:36.638000 audit[5715]: USER_START pid=5715 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:36.639000 audit[5718]: CRED_ACQ pid=5718 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:36.687993 kernel: audit: type=1105 audit(1734099036.638:442): pid=5715 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:36.688198 kernel: audit: type=1103 audit(1734099036.639:443): pid=5718 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:36.969441 sshd[5715]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:36.969000 audit[5715]: USER_END pid=5715 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:36.975585 systemd[1]: sshd@8-10.200.20.36:22-10.200.16.10:50604.service: Deactivated successfully. Dec 13 14:10:36.976481 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:10:36.972000 audit[5715]: CRED_DISP pid=5715 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:36.997839 systemd-logind[1575]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:10:36.998897 systemd-logind[1575]: Removed session 11. Dec 13 14:10:37.018330 kernel: audit: type=1106 audit(1734099036.969:444): pid=5715 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:37.018490 kernel: audit: type=1104 audit(1734099036.972:445): pid=5715 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:36.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.36:22-10.200.16.10:50604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:42.037753 systemd[1]: Started sshd@9-10.200.20.36:22-10.200.16.10:60208.service. 
Dec 13 14:10:42.064962 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:10:42.065069 kernel: audit: type=1130 audit(1734099042.036:447): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.36:22-10.200.16.10:60208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:42.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.36:22-10.200.16.10:60208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:42.454565 sshd[5729]: Accepted publickey for core from 10.200.16.10 port 60208 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:42.453000 audit[5729]: USER_ACCT pid=5729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:42.478000 audit[5729]: CRED_ACQ pid=5729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:42.501271 kernel: audit: type=1101 audit(1734099042.453:448): pid=5729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:42.501376 kernel: audit: type=1103 audit(1734099042.478:449): pid=5729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:42.478781 sshd[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:42.515726 kernel: audit: type=1006 audit(1734099042.478:450): pid=5729 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 13 14:10:42.540447 kernel: audit: type=1300 audit(1734099042.478:450): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4c75760 a2=3 a3=1 items=0 ppid=1 pid=5729 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:42.478000 audit[5729]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4c75760 a2=3 a3=1 items=0 ppid=1 pid=5729 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:42.478000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:10:42.548509 kernel: audit: type=1327 audit(1734099042.478:450): proctitle=737368643A20636F7265205B707269765D Dec 13 14:10:42.552056 systemd[1]: Started session-12.scope. Dec 13 14:10:42.552288 systemd-logind[1575]: New session 12 of user core. 
Dec 13 14:10:42.557000 audit[5729]: USER_START pid=5729 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:42.584588 kernel: audit: type=1105 audit(1734099042.557:451): pid=5729 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:42.584695 kernel: audit: type=1103 audit(1734099042.583:452): pid=5732 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:42.583000 audit[5732]: CRED_ACQ pid=5732 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:42.894130 sshd[5729]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:42.894000 audit[5729]: USER_END pid=5729 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:42.898475 systemd[1]: sshd@9-10.200.20.36:22-10.200.16.10:60208.service: Deactivated successfully. Dec 13 14:10:42.899417 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:10:42.896000 audit[5729]: CRED_DISP pid=5729 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:42.945819 kernel: audit: type=1106 audit(1734099042.894:453): pid=5729 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:42.945998 kernel: audit: type=1104 audit(1734099042.896:454): pid=5729 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:42.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.36:22-10.200.16.10:60208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:42.946455 systemd-logind[1575]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:10:42.947573 systemd-logind[1575]: Removed session 12. Dec 13 14:10:47.963051 systemd[1]: Started sshd@10-10.200.20.36:22-10.200.16.10:60222.service. 
Dec 13 14:10:47.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.36:22-10.200.16.10:60222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:47.970778 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:10:47.970915 kernel: audit: type=1130 audit(1734099047.962:456): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.36:22-10.200.16.10:60222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:48.386000 audit[5764]: USER_ACCT pid=5764 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:48.387192 sshd[5764]: Accepted publickey for core from 10.200.16.10 port 60222 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:48.389158 sshd[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:48.388000 audit[5764]: CRED_ACQ pid=5764 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:48.433235 kernel: audit: type=1101 audit(1734099048.386:457): pid=5764 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:48.433327 kernel: audit: type=1103 audit(1734099048.388:458): pid=5764 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:48.447829 kernel: audit: type=1006 audit(1734099048.388:459): pid=5764 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 13 14:10:48.388000 audit[5764]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd376750 a2=3 a3=1 items=0 ppid=1 pid=5764 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:48.472594 kernel: audit: type=1300 audit(1734099048.388:459): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd376750 a2=3 a3=1 items=0 ppid=1 pid=5764 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:48.388000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:10:48.481000 kernel: audit: type=1327 audit(1734099048.388:459): proctitle=737368643A20636F7265205B707269765D Dec 13 14:10:48.484721 systemd[1]: Started session-13.scope. Dec 13 14:10:48.485228 systemd-logind[1575]: New session 13 of user core. 
Dec 13 14:10:48.491000 audit[5764]: USER_START pid=5764 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:48.492000 audit[5767]: CRED_ACQ pid=5767 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:48.539982 kernel: audit: type=1105 audit(1734099048.491:460): pid=5764 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:48.540125 kernel: audit: type=1103 audit(1734099048.492:461): pid=5767 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:48.806730 sshd[5764]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:48.807000 audit[5764]: USER_END pid=5764 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:48.810040 systemd[1]: sshd@10-10.200.20.36:22-10.200.16.10:60222.service: Deactivated successfully. Dec 13 14:10:48.810898 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:10:48.807000 audit[5764]: CRED_DISP pid=5764 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:48.834014 systemd-logind[1575]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:10:48.835089 systemd-logind[1575]: Removed session 13. Dec 13 14:10:48.853502 kernel: audit: type=1106 audit(1734099048.807:462): pid=5764 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:48.853671 kernel: audit: type=1104 audit(1734099048.807:463): pid=5764 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:48.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.36:22-10.200.16.10:60222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:48.874237 systemd[1]: Started sshd@11-10.200.20.36:22-10.200.16.10:39734.service. 
Dec 13 14:10:48.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.36:22-10.200.16.10:39734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:49.293000 audit[5778]: USER_ACCT pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:49.295153 sshd[5778]: Accepted publickey for core from 10.200.16.10 port 39734 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:49.294000 audit[5778]: CRED_ACQ pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:49.295000 audit[5778]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffca48e940 a2=3 a3=1 items=0 ppid=1 pid=5778 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:49.295000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:10:49.296648 sshd[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:49.301453 systemd-logind[1575]: New session 14 of user core. Dec 13 14:10:49.301853 systemd[1]: Started session-14.scope. Dec 13 14:10:49.308000 audit[5778]: USER_START pid=5778 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:49.310000 audit[5781]: CRED_ACQ pid=5781 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:49.719924 sshd[5778]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:49.720000 audit[5778]: USER_END pid=5778 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:49.720000 audit[5778]: CRED_DISP pid=5778 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:49.722789 systemd[1]: sshd@11-10.200.20.36:22-10.200.16.10:39734.service: Deactivated successfully. Dec 13 14:10:49.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.36:22-10.200.16.10:39734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:49.723822 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:10:49.723868 systemd-logind[1575]: Session 14 logged out. Waiting for processes to exit. 
Dec 13 14:10:49.724871 systemd-logind[1575]: Removed session 14. Dec 13 14:10:49.788206 systemd[1]: Started sshd@12-10.200.20.36:22-10.200.16.10:39750.service. Dec 13 14:10:49.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.36:22-10.200.16.10:39750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:50.209000 audit[5789]: USER_ACCT pid=5789 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:50.210320 sshd[5789]: Accepted publickey for core from 10.200.16.10 port 39750 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:50.211000 audit[5789]: CRED_ACQ pid=5789 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:50.211000 audit[5789]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff80ef700 a2=3 a3=1 items=0 ppid=1 pid=5789 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:50.211000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:10:50.211984 sshd[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:50.216711 systemd[1]: Started session-15.scope. Dec 13 14:10:50.216957 systemd-logind[1575]: New session 15 of user core. Dec 13 14:10:50.221000 audit[5789]: USER_START pid=5789 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:50.222000 audit[5792]: CRED_ACQ pid=5792 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:50.619974 sshd[5789]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:50.620000 audit[5789]: USER_END pid=5789 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:50.621000 audit[5789]: CRED_DISP pid=5789 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:50.624110 systemd[1]: sshd@12-10.200.20.36:22-10.200.16.10:39750.service: Deactivated successfully. Dec 13 14:10:50.625005 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:10:50.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.36:22-10.200.16.10:39750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:10:50.625417 systemd-logind[1575]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:10:50.626164 systemd-logind[1575]: Removed session 15. Dec 13 14:10:55.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.36:22-10.200.16.10:39756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:55.687550 systemd[1]: Started sshd@13-10.200.20.36:22-10.200.16.10:39756.service. Dec 13 14:10:55.692411 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 13 14:10:55.692535 kernel: audit: type=1130 audit(1734099055.687:483): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.36:22-10.200.16.10:39756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:56.118000 audit[5806]: USER_ACCT pid=5806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:56.119090 sshd[5806]: Accepted publickey for core from 10.200.16.10 port 39756 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:56.145643 kernel: audit: type=1101 audit(1734099056.118:484): pid=5806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:56.144000 audit[5806]: CRED_ACQ pid=5806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:56.145798 sshd[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:56.184197 kernel: audit: type=1103 audit(1734099056.144:485): pid=5806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:56.184371 kernel: audit: type=1006 audit(1734099056.144:486): pid=5806 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 14:10:56.144000 audit[5806]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe1b9e7e0 a2=3 a3=1 items=0 ppid=1 pid=5806 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:56.144000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:10:56.217635 kernel: audit: type=1300 audit(1734099056.144:486): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe1b9e7e0 a2=3 a3=1 items=0 ppid=1 pid=5806 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:56.217733 kernel: audit: type=1327 audit(1734099056.144:486): proctitle=737368643A20636F7265205B707269765D Dec 13 14:10:56.220923 systemd-logind[1575]: New session 16 of user core. 
Dec 13 14:10:56.221403 systemd[1]: Started session-16.scope. Dec 13 14:10:56.225000 audit[5806]: USER_START pid=5806 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:56.252643 kernel: audit: type=1105 audit(1734099056.225:487): pid=5806 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:56.252000 audit[5809]: CRED_ACQ pid=5809 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:56.275628 kernel: audit: type=1103 audit(1734099056.252:488): pid=5809 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:56.573938 sshd[5806]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:56.574000 audit[5806]: USER_END pid=5806 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:56.599866 systemd[1]: sshd@13-10.200.20.36:22-10.200.16.10:39756.service: Deactivated successfully. Dec 13 14:10:56.574000 audit[5806]: CRED_DISP pid=5806 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:56.620716 kernel: audit: type=1106 audit(1734099056.574:489): pid=5806 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:56.620861 kernel: audit: type=1104 audit(1734099056.574:490): pid=5806 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:56.621928 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:10:56.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.36:22-10.200.16.10:39756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:56.621970 systemd-logind[1575]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:10:56.623345 systemd-logind[1575]: Removed session 16. Dec 13 14:10:56.643623 systemd[1]: Started sshd@14-10.200.20.36:22-10.200.16.10:39766.service. 
Dec 13 14:10:56.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.36:22-10.200.16.10:39766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:57.073000 audit[5819]: USER_ACCT pid=5819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:57.075390 sshd[5819]: Accepted publickey for core from 10.200.16.10 port 39766 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:57.075000 audit[5819]: CRED_ACQ pid=5819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:57.075000 audit[5819]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffec171500 a2=3 a3=1 items=0 ppid=1 pid=5819 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:57.075000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:10:57.076009 sshd[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:57.079731 systemd-logind[1575]: New session 17 of user core. Dec 13 14:10:57.080522 systemd[1]: Started session-17.scope. Dec 13 14:10:57.084000 audit[5819]: USER_START pid=5819 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:57.086000 audit[5822]: CRED_ACQ pid=5822 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:57.542032 sshd[5819]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:57.542000 audit[5819]: USER_END pid=5819 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:57.542000 audit[5819]: CRED_DISP pid=5819 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:57.547441 systemd-logind[1575]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:10:57.548343 systemd[1]: sshd@14-10.200.20.36:22-10.200.16.10:39766.service: Deactivated successfully. Dec 13 14:10:57.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.36:22-10.200.16.10:39766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:57.549328 systemd[1]: session-17.scope: Deactivated successfully. 
Dec 13 14:10:57.550797 systemd-logind[1575]: Removed session 17. Dec 13 14:10:57.612154 systemd[1]: Started sshd@15-10.200.20.36:22-10.200.16.10:39774.service. Dec 13 14:10:57.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.36:22-10.200.16.10:39774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:58.041000 audit[5831]: USER_ACCT pid=5831 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:58.041869 sshd[5831]: Accepted publickey for core from 10.200.16.10 port 39774 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:58.042000 audit[5831]: CRED_ACQ pid=5831 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:58.042000 audit[5831]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe6bcf910 a2=3 a3=1 items=0 ppid=1 pid=5831 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:58.042000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:10:58.043677 sshd[5831]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:58.048387 systemd-logind[1575]: New session 18 of user core. Dec 13 14:10:58.049749 systemd[1]: Started session-18.scope. 
Dec 13 14:10:58.053000 audit[5831]: USER_START pid=5831 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:58.055000 audit[5834]: CRED_ACQ pid=5834 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:59.851000 audit[5852]: NETFILTER_CFG table=filter:124 family=2 entries=20 op=nft_register_rule pid=5852 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:10:59.851000 audit[5852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffee13e000 a2=0 a3=1 items=0 ppid=2930 pid=5852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:59.851000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:10:59.855000 audit[5852]: NETFILTER_CFG table=nat:125 family=2 entries=22 op=nft_register_rule pid=5852 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:10:59.855000 audit[5852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffee13e000 a2=0 a3=1 items=0 ppid=2930 pid=5852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:59.855000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:10:59.877000 audit[5854]: NETFILTER_CFG table=filter:126 family=2 entries=32 op=nft_register_rule pid=5854 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:10:59.877000 audit[5854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffffc5b060 a2=0 a3=1 items=0 ppid=2930 pid=5854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:59.877000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:10:59.885000 audit[5854]: NETFILTER_CFG table=nat:127 family=2 entries=22 op=nft_register_rule pid=5854 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:10:59.885000 audit[5854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffffc5b060 a2=0 a3=1 items=0 ppid=2930 pid=5854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:10:59.885000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:10:59.948111 sshd[5831]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:59.949000 audit[5831]: USER_END pid=5831 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:59.949000 audit[5831]: CRED_DISP pid=5831 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:10:59.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.36:22-10.200.16.10:39774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:10:59.951375 systemd[1]: sshd@15-10.200.20.36:22-10.200.16.10:39774.service: Deactivated successfully. Dec 13 14:10:59.952867 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:10:59.953195 systemd-logind[1575]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:10:59.954185 systemd-logind[1575]: Removed session 18. Dec 13 14:11:00.014529 systemd[1]: Started sshd@16-10.200.20.36:22-10.200.16.10:54238.service. Dec 13 14:11:00.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.36:22-10.200.16.10:54238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:00.425000 audit[5857]: USER_ACCT pid=5857 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:00.425885 sshd[5857]: Accepted publickey for core from 10.200.16.10 port 54238 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:00.426000 audit[5857]: CRED_ACQ pid=5857 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:00.426000 audit[5857]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd626e5a0 a2=3 a3=1 items=0 ppid=1 pid=5857 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:00.426000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:11:00.427212 sshd[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:00.434293 systemd[1]: Started session-19.scope. Dec 13 14:11:00.434797 systemd-logind[1575]: New session 19 of user core. 
Dec 13 14:11:00.439000 audit[5857]: USER_START pid=5857 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:00.440000 audit[5863]: CRED_ACQ pid=5863 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:00.931314 sshd[5857]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:00.931000 audit[5857]: USER_END pid=5857 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:00.937609 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 13 14:11:00.937756 kernel: audit: type=1106 audit(1734099060.931:520): pid=5857 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:00.940792 systemd[1]: sshd@16-10.200.20.36:22-10.200.16.10:54238.service: Deactivated successfully. Dec 13 14:11:00.941655 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:11:00.943020 systemd-logind[1575]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:11:00.944136 systemd-logind[1575]: Removed session 19. Dec 13 14:11:00.932000 audit[5857]: CRED_DISP pid=5857 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:00.986779 kernel: audit: type=1104 audit(1734099060.932:521): pid=5857 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:00.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.36:22-10.200.16.10:54238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:01.002392 systemd[1]: Started sshd@17-10.200.20.36:22-10.200.16.10:54250.service. Dec 13 14:11:01.008439 kernel: audit: type=1131 audit(1734099060.940:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.36:22-10.200.16.10:54238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:01.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.36:22-10.200.16.10:54250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:11:01.029973 kernel: audit: type=1130 audit(1734099061.002:523): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.36:22-10.200.16.10:54250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:01.434000 audit[5871]: USER_ACCT pid=5871 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:01.434903 sshd[5871]: Accepted publickey for core from 10.200.16.10 port 54250 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:01.436895 sshd[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:01.435000 audit[5871]: CRED_ACQ pid=5871 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:01.462993 systemd[1]: Started session-20.scope. Dec 13 14:11:01.463947 systemd-logind[1575]: New session 20 of user core. Dec 13 14:11:01.479709 kernel: audit: type=1101 audit(1734099061.434:524): pid=5871 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:01.479827 kernel: audit: type=1103 audit(1734099061.435:525): pid=5871 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:01.493712 kernel: audit: type=1006 audit(1734099061.435:526): pid=5871 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 13 14:11:01.435000 audit[5871]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff851e000 a2=3 a3=1 items=0 ppid=1 pid=5871 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:01.516712 kernel: audit: type=1300 audit(1734099061.435:526): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff851e000 a2=3 a3=1 items=0 ppid=1 pid=5871 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:01.435000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:11:01.524686 kernel: audit: type=1327 audit(1734099061.435:526): proctitle=737368643A20636F7265205B707269765D Dec 13 14:11:01.524785 kernel: audit: type=1105 audit(1734099061.468:527): pid=5871 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:01.468000 audit[5871]: USER_START pid=5871 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:01.475000 audit[5874]: CRED_ACQ pid=5874 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:01.822989 sshd[5871]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:01.822000 audit[5871]: USER_END pid=5871 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:01.822000 audit[5871]: CRED_DISP pid=5871 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:01.826032 systemd-logind[1575]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:11:01.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.36:22-10.200.16.10:54250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:01.826798 systemd[1]: sshd@17-10.200.20.36:22-10.200.16.10:54250.service: Deactivated successfully. Dec 13 14:11:01.827687 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:11:01.829085 systemd-logind[1575]: Removed session 20. Dec 13 14:11:02.022746 systemd[1]: run-containerd-runc-k8s.io-5a62e217dcb2167ca05d154847bc9b1073d7a71776d0e69e1a62b342157abfc6-runc.zs3K6N.mount: Deactivated successfully. Dec 13 14:11:03.496633 update_engine[1579]: I1213 14:11:03.496399 1579 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 14:11:03.496633 update_engine[1579]: I1213 14:11:03.496456 1579 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 14:11:03.497212 update_engine[1579]: I1213 14:11:03.497190 1579 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 14:11:03.497563 update_engine[1579]: I1213 14:11:03.497540 1579 omaha_request_params.cc:62] Current group set to lts Dec 13 14:11:03.498460 update_engine[1579]: I1213 14:11:03.498440 1579 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 14:11:03.498460 update_engine[1579]: I1213 14:11:03.498454 1579 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 14:11:03.498549 update_engine[1579]: I1213 14:11:03.498473 1579 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 14:11:03.498549 update_engine[1579]: I1213 14:11:03.498505 1579 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 14:11:03.499005 update_engine[1579]: I1213 14:11:03.498976 1579 omaha_request_action.cc:270] Posting an Omaha request to disabled Dec 13 14:11:03.499005 update_engine[1579]: I1213 14:11:03.498996 1579 omaha_request_action.cc:271] Request: Dec 13 14:11:03.499005 update_engine[1579]: Dec 13 14:11:03.499005 update_engine[1579]: Dec 13 14:11:03.499005 update_engine[1579]: Dec 13 14:11:03.499005 update_engine[1579]: Dec 13 14:11:03.499005 update_engine[1579]: Dec 13 14:11:03.499005 update_engine[1579]: Dec 13 14:11:03.499005 update_engine[1579]: Dec 13 14:11:03.499005 update_engine[1579]: Dec 13 14:11:03.499005 update_engine[1579]: I1213 14:11:03.499001 1579 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:11:03.500621 locksmithd[1686]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 14:11:03.515443 update_engine[1579]: I1213 14:11:03.515405 1579 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:11:03.515866 update_engine[1579]: I1213 14:11:03.515843 1579 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 14:11:03.567793 update_engine[1579]: E1213 14:11:03.567657 1579 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:11:03.567793 update_engine[1579]: I1213 14:11:03.567781 1579 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 14:11:06.421000 audit[5903]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5903 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:11:06.427684 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 13 14:11:06.427768 kernel: audit: type=1325 audit(1734099066.421:532): table=filter:128 family=2 entries=20 op=nft_register_rule pid=5903 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:11:06.421000 audit[5903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffc81d3060 a2=0 a3=1 items=0 ppid=2930 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:06.469318 kernel: audit: type=1300 audit(1734099066.421:532): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffc81d3060 a2=0 a3=1 items=0 ppid=2930 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:06.421000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:11:06.482829 kernel: audit: type=1327 audit(1734099066.421:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:11:06.440000 audit[5903]: NETFILTER_CFG table=nat:129 family=2 entries=106 op=nft_register_chain pid=5903 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:11:06.496697 kernel: audit: type=1325 audit(1734099066.440:533): table=nat:129 family=2 entries=106 
op=nft_register_chain pid=5903 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:11:06.440000 audit[5903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffc81d3060 a2=0 a3=1 items=0 ppid=2930 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:06.524987 kernel: audit: type=1300 audit(1734099066.440:533): arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffc81d3060 a2=0 a3=1 items=0 ppid=2930 pid=5903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:06.440000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:11:06.538187 kernel: audit: type=1327 audit(1734099066.440:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:11:06.889111 systemd[1]: Started sshd@18-10.200.20.36:22-10.200.16.10:54254.service. Dec 13 14:11:06.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.36:22-10.200.16.10:54254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:06.913625 kernel: audit: type=1130 audit(1734099066.888:534): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.36:22-10.200.16.10:54254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:11:07.299000 audit[5905]: USER_ACCT pid=5905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:07.301632 sshd[5905]: Accepted publickey for core from 10.200.16.10 port 54254 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:07.326622 kernel: audit: type=1101 audit(1734099067.299:535): pid=5905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:07.325000 audit[5905]: CRED_ACQ pid=5905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:07.328062 sshd[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:07.364372 kernel: audit: type=1103 audit(1734099067.325:536): pid=5905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:07.364474 kernel: audit: type=1006 audit(1734099067.326:537): pid=5905 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 13 14:11:07.326000 audit[5905]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe423f2e0 a2=3 a3=1 items=0 ppid=1 pid=5905 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:07.326000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:11:07.368760 systemd-logind[1575]: New session 21 of user core. Dec 13 14:11:07.369318 systemd[1]: Started session-21.scope. 
Dec 13 14:11:07.373000 audit[5905]: USER_START pid=5905 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:07.374000 audit[5908]: CRED_ACQ pid=5908 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:07.706696 sshd[5905]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:07.706000 audit[5905]: USER_END pid=5905 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:07.706000 audit[5905]: CRED_DISP pid=5905 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:07.709829 systemd-logind[1575]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:11:07.710421 systemd[1]: sshd@18-10.200.20.36:22-10.200.16.10:54254.service: Deactivated successfully. Dec 13 14:11:07.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.36:22-10.200.16.10:54254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:07.711341 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:11:07.711757 systemd-logind[1575]: Removed session 21. Dec 13 14:11:12.779075 systemd[1]: Started sshd@19-10.200.20.36:22-10.200.16.10:41034.service. Dec 13 14:11:12.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.36:22-10.200.16.10:41034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:12.785634 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 14:11:12.785751 kernel: audit: type=1130 audit(1734099072.777:543): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.36:22-10.200.16.10:41034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:11:13.207000 audit[5938]: USER_ACCT pid=5938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:13.209781 sshd[5938]: Accepted publickey for core from 10.200.16.10 port 41034 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:13.233637 kernel: audit: type=1101 audit(1734099073.207:544): pid=5938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:13.232000 audit[5938]: CRED_ACQ pid=5938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:13.236710 sshd[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:13.269972 kernel: audit: type=1103 audit(1734099073.232:545): pid=5938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:13.270109 kernel: audit: type=1006 audit(1734099073.232:546): pid=5938 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 13 14:11:13.232000 audit[5938]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe15f4000 a2=3 a3=1 items=0 ppid=1 pid=5938 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:13.294582 kernel: audit: type=1300 audit(1734099073.232:546): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe15f4000 a2=3 a3=1 items=0 ppid=1 pid=5938 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:13.232000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:11:13.301336 systemd-logind[1575]: New session 22 of user core. Dec 13 14:11:13.302036 systemd[1]: Started session-22.scope. 
Dec 13 14:11:13.303107 kernel: audit: type=1327 audit(1734099073.232:546): proctitle=737368643A20636F7265205B707269765D Dec 13 14:11:13.305000 audit[5938]: USER_START pid=5938 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:13.307000 audit[5941]: CRED_ACQ pid=5941 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:13.356477 kernel: audit: type=1105 audit(1734099073.305:547): pid=5938 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:13.356773 kernel: audit: type=1103 audit(1734099073.307:548): pid=5941 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:13.495694 update_engine[1579]: I1213 14:11:13.495368 1579 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:11:13.496314 update_engine[1579]: I1213 14:11:13.496037 1579 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:11:13.496314 update_engine[1579]: I1213 14:11:13.496284 1579 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 14:11:13.604633 update_engine[1579]: E1213 14:11:13.604470 1579 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:11:13.604633 update_engine[1579]: I1213 14:11:13.604578 1579 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 14:11:13.634116 sshd[5938]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:13.633000 audit[5938]: USER_END pid=5938 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:13.633000 audit[5938]: CRED_DISP pid=5938 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:13.664226 systemd-logind[1575]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:11:13.665649 systemd[1]: sshd@19-10.200.20.36:22-10.200.16.10:41034.service: Deactivated successfully. Dec 13 14:11:13.666527 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:11:13.668072 systemd-logind[1575]: Removed session 22. 
Dec 13 14:11:13.684086 kernel: audit: type=1106 audit(1734099073.633:549): pid=5938 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:13.684249 kernel: audit: type=1104 audit(1734099073.633:550): pid=5938 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:13.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.36:22-10.200.16.10:41034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:18.702230 systemd[1]: Started sshd@20-10.200.20.36:22-10.200.16.10:38506.service. Dec 13 14:11:18.728826 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:11:18.728946 kernel: audit: type=1130 audit(1734099078.700:552): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.36:22-10.200.16.10:38506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:18.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.36:22-10.200.16.10:38506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:19.118000 audit[5975]: USER_ACCT pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:19.120146 sshd[5975]: Accepted publickey for core from 10.200.16.10 port 38506 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:19.122081 sshd[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:19.120000 audit[5975]: CRED_ACQ pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:19.169387 kernel: audit: type=1101 audit(1734099079.118:553): pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:19.169508 kernel: audit: type=1103 audit(1734099079.120:554): pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:19.169537 kernel: audit: type=1006 audit(1734099079.120:555): pid=5975 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 13 14:11:19.185694 kernel: audit: type=1300 audit(1734099079.120:555): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc51418b0 a2=3 a3=1 items=0 ppid=1 pid=5975 auid=500 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:19.120000 audit[5975]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc51418b0 a2=3 a3=1 items=0 ppid=1 pid=5975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:19.120000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:11:19.220391 kernel: audit: type=1327 audit(1734099079.120:555): proctitle=737368643A20636F7265205B707269765D Dec 13 14:11:19.224326 systemd-logind[1575]: New session 23 of user core. Dec 13 14:11:19.225823 systemd[1]: Started session-23.scope. Dec 13 14:11:19.231000 audit[5975]: USER_START pid=5975 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:19.233000 audit[5978]: CRED_ACQ pid=5978 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:19.280857 kernel: audit: type=1105 audit(1734099079.231:556): pid=5975 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:19.280991 kernel: audit: type=1103 audit(1734099079.233:557): pid=5978 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:19.552584 sshd[5975]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:19.552000 audit[5975]: USER_END pid=5975 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:19.556177 systemd[1]: sshd@20-10.200.20.36:22-10.200.16.10:38506.service: Deactivated successfully. Dec 13 14:11:19.557104 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 14:11:19.553000 audit[5975]: CRED_DISP pid=5975 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:19.579301 systemd-logind[1575]: Session 23 logged out. Waiting for processes to exit. 
Dec 13 14:11:19.600171 kernel: audit: type=1106 audit(1734099079.552:558): pid=5975 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:19.600266 kernel: audit: type=1104 audit(1734099079.553:559): pid=5975 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:19.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.36:22-10.200.16.10:38506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:19.600992 systemd-logind[1575]: Removed session 23. Dec 13 14:11:23.499546 update_engine[1579]: I1213 14:11:23.499119 1579 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:11:23.499546 update_engine[1579]: I1213 14:11:23.499321 1579 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:11:23.499546 update_engine[1579]: I1213 14:11:23.499512 1579 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 14:11:23.533506 update_engine[1579]: E1213 14:11:23.533372 1579 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:11:23.533506 update_engine[1579]: I1213 14:11:23.533476 1579 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 14:11:24.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.36:22-10.200.16.10:38514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:24.627503 systemd[1]: Started sshd@21-10.200.20.36:22-10.200.16.10:38514.service. Dec 13 14:11:24.632361 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:11:24.632444 kernel: audit: type=1130 audit(1734099084.627:561): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.36:22-10.200.16.10:38514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:11:25.060000 audit[5987]: USER_ACCT pid=5987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:25.061010 sshd[5987]: Accepted publickey for core from 10.200.16.10 port 38514 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:25.084631 kernel: audit: type=1101 audit(1734099085.060:562): pid=5987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:25.084000 audit[5987]: CRED_ACQ pid=5987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:25.085939 sshd[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:25.121206 kernel: audit: type=1103 audit(1734099085.084:563): pid=5987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:25.121390 kernel: audit: type=1006 audit(1734099085.084:564): pid=5987 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 13 14:11:25.084000 audit[5987]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0a7ba80 a2=3 a3=1 items=0 ppid=1 pid=5987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:25.126103 systemd[1]: Started session-24.scope. Dec 13 14:11:25.127387 systemd-logind[1575]: New session 24 of user core. 
Dec 13 14:11:25.145231 kernel: audit: type=1300 audit(1734099085.084:564): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0a7ba80 a2=3 a3=1 items=0 ppid=1 pid=5987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:25.084000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:11:25.153891 kernel: audit: type=1327 audit(1734099085.084:564): proctitle=737368643A20636F7265205B707269765D Dec 13 14:11:25.154210 kernel: audit: type=1105 audit(1734099085.131:565): pid=5987 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:25.131000 audit[5987]: USER_START pid=5987 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:25.133000 audit[5990]: CRED_ACQ pid=5990 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:25.201328 kernel: audit: type=1103 audit(1734099085.133:566): pid=5990 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:25.483560 sshd[5987]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:25.483000 audit[5987]: USER_END pid=5987 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:25.487441 systemd[1]: sshd@21-10.200.20.36:22-10.200.16.10:38514.service: Deactivated successfully. Dec 13 14:11:25.488372 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:11:25.485000 audit[5987]: CRED_DISP pid=5987 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:25.514854 systemd-logind[1575]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:11:25.515837 systemd-logind[1575]: Removed session 24. 
Dec 13 14:11:25.537838 kernel: audit: type=1106 audit(1734099085.483:567): pid=5987 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:25.537997 kernel: audit: type=1104 audit(1734099085.485:568): pid=5987 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:25.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.36:22-10.200.16.10:38514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:30.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.36:22-10.200.16.10:38248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:30.561541 systemd[1]: Started sshd@22-10.200.20.36:22-10.200.16.10:38248.service. Dec 13 14:11:30.571621 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:11:30.571725 kernel: audit: type=1130 audit(1734099090.561:570): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.36:22-10.200.16.10:38248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:30.996000 audit[6003]: USER_ACCT pid=6003 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:30.997885 sshd[6003]: Accepted publickey for core from 10.200.16.10 port 38248 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:31.021634 kernel: audit: type=1101 audit(1734099090.996:571): pid=6003 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:31.022000 audit[6003]: CRED_ACQ pid=6003 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:31.023388 sshd[6003]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:31.048988 systemd[1]: Started session-25.scope. Dec 13 14:11:31.050241 systemd-logind[1575]: New session 25 of user core. 
Dec 13 14:11:31.059628 kernel: audit: type=1103 audit(1734099091.022:572): pid=6003 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:31.059752 kernel: audit: type=1006 audit(1734099091.022:573): pid=6003 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 13 14:11:31.022000 audit[6003]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7f04ff0 a2=3 a3=1 items=0 ppid=1 pid=6003 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:31.085796 kernel: audit: type=1300 audit(1734099091.022:573): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7f04ff0 a2=3 a3=1 items=0 ppid=1 pid=6003 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:31.022000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:11:31.094615 kernel: audit: type=1327 audit(1734099091.022:573): proctitle=737368643A20636F7265205B707269765D Dec 13 14:11:31.054000 audit[6003]: USER_START pid=6003 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:31.056000 audit[6006]: CRED_ACQ pid=6006 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:31.141176 kernel: audit: type=1105 audit(1734099091.054:574): pid=6003 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:31.141279 kernel: audit: type=1103 audit(1734099091.056:575): pid=6006 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:31.419828 sshd[6003]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:31.420000 audit[6003]: USER_END pid=6003 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:31.423136 systemd-logind[1575]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:11:31.424755 systemd[1]: sshd@22-10.200.20.36:22-10.200.16.10:38248.service: Deactivated successfully. Dec 13 14:11:31.425587 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:11:31.427093 systemd-logind[1575]: Removed session 25. 
Dec 13 14:11:31.420000 audit[6003]: CRED_DISP pid=6003 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:31.475411 kernel: audit: type=1106 audit(1734099091.420:576): pid=6003 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:31.475556 kernel: audit: type=1104 audit(1734099091.420:577): pid=6003 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:31.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.36:22-10.200.16.10:38248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:33.497404 update_engine[1579]: I1213 14:11:33.497355 1579 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:11:33.498263 update_engine[1579]: I1213 14:11:33.497563 1579 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:11:33.498263 update_engine[1579]: I1213 14:11:33.497788 1579 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 14:11:33.608237 update_engine[1579]: E1213 14:11:33.608197 1579 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:11:33.608394 update_engine[1579]: I1213 14:11:33.608304 1579 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 14:11:33.608394 update_engine[1579]: I1213 14:11:33.608309 1579 omaha_request_action.cc:621] Omaha request response: Dec 13 14:11:33.608394 update_engine[1579]: E1213 14:11:33.608389 1579 omaha_request_action.cc:640] Omaha request network transfer failed. Dec 13 14:11:33.608462 update_engine[1579]: I1213 14:11:33.608403 1579 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 14:11:33.608462 update_engine[1579]: I1213 14:11:33.608407 1579 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 14:11:33.608462 update_engine[1579]: I1213 14:11:33.608410 1579 update_attempter.cc:306] Processing Done. Dec 13 14:11:33.608462 update_engine[1579]: E1213 14:11:33.608422 1579 update_attempter.cc:619] Update failed. Dec 13 14:11:33.608462 update_engine[1579]: I1213 14:11:33.608426 1579 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 14:11:33.608462 update_engine[1579]: I1213 14:11:33.608429 1579 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 14:11:33.608462 update_engine[1579]: I1213 14:11:33.608433 1579 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Dec 13 14:11:33.608719 update_engine[1579]: I1213 14:11:33.608494 1579 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 14:11:33.608719 update_engine[1579]: I1213 14:11:33.608512 1579 omaha_request_action.cc:270] Posting an Omaha request to disabled Dec 13 14:11:33.608719 update_engine[1579]: I1213 14:11:33.608515 1579 omaha_request_action.cc:271] Request: Dec 13 14:11:33.608719 update_engine[1579]: Dec 13 14:11:33.608719 update_engine[1579]: Dec 13 14:11:33.608719 update_engine[1579]: Dec 13 14:11:33.608719 update_engine[1579]: Dec 13 14:11:33.608719 update_engine[1579]: Dec 13 14:11:33.608719 update_engine[1579]: Dec 13 14:11:33.608719 update_engine[1579]: I1213 14:11:33.608520 1579 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:11:33.608719 update_engine[1579]: I1213 14:11:33.608669 1579 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:11:33.608938 update_engine[1579]: I1213 14:11:33.608846 1579 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 14:11:33.609123 locksmithd[1686]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 14:11:33.709752 update_engine[1579]: E1213 14:11:33.709713 1579 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:11:33.709900 update_engine[1579]: I1213 14:11:33.709822 1579 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 14:11:33.709900 update_engine[1579]: I1213 14:11:33.709828 1579 omaha_request_action.cc:621] Omaha request response: Dec 13 14:11:33.709900 update_engine[1579]: I1213 14:11:33.709833 1579 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 14:11:33.709900 update_engine[1579]: I1213 14:11:33.709836 1579 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 14:11:33.709900 update_engine[1579]: I1213 14:11:33.709839 1579 update_attempter.cc:306] Processing Done. Dec 13 14:11:33.709900 update_engine[1579]: I1213 14:11:33.709843 1579 update_attempter.cc:310] Error event sent. Dec 13 14:11:33.709900 update_engine[1579]: I1213 14:11:33.709852 1579 update_check_scheduler.cc:74] Next update check in 45m15s Dec 13 14:11:33.710172 locksmithd[1686]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 14:11:36.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.36:22-10.200.16.10:38262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:11:36.488204 systemd[1]: Started sshd@23-10.200.20.36:22-10.200.16.10:38262.service. Dec 13 14:11:36.493362 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:11:36.493529 kernel: audit: type=1130 audit(1734099096.487:579): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.36:22-10.200.16.10:38262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:11:36.920000 audit[6041]: USER_ACCT pid=6041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:36.921566 sshd[6041]: Accepted publickey for core from 10.200.16.10 port 38262 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:36.945640 kernel: audit: type=1101 audit(1734099096.920:580): pid=6041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:36.945763 kernel: audit: type=1103 audit(1734099096.944:581): pid=6041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:36.944000 audit[6041]: CRED_ACQ pid=6041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:11:36.948801 sshd[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:36.982297 kernel: audit: type=1006 audit(1734099096.945:582): pid=6041 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 13 14:11:36.945000 audit[6041]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdcf88300 a2=3 a3=1 items=0 ppid=1 pid=6041 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:37.006110 kernel: audit: type=1300 audit(1734099096.945:582): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdcf88300 a2=3 a3=1 items=0 ppid=1 pid=6041 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:11:36.987400 systemd[1]: Started session-26.scope. Dec 13 14:11:36.987581 systemd-logind[1575]: New session 26 of user core. 
Dec 13 14:11:36.945000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:11:37.015000 audit[6041]: USER_START pid=6041 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:37.041503 kernel: audit: type=1327 audit(1734099096.945:582): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:11:37.041618 kernel: audit: type=1105 audit(1734099097.015:583): pid=6041 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:37.017000 audit[6044]: CRED_ACQ pid=6044 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:37.062806 kernel: audit: type=1103 audit(1734099097.017:584): pid=6044 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:37.330475 sshd[6041]: pam_unix(sshd:session): session closed for user core
Dec 13 14:11:37.331000 audit[6041]: USER_END pid=6041 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:37.335004 systemd[1]: sshd@23-10.200.20.36:22-10.200.16.10:38262.service: Deactivated successfully.
Dec 13 14:11:37.335900 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 14:11:37.332000 audit[6041]: CRED_DISP pid=6041 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:37.379560 kernel: audit: type=1106 audit(1734099097.331:585): pid=6041 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:37.379758 kernel: audit: type=1104 audit(1734099097.332:586): pid=6041 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:37.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.36:22-10.200.16.10:38262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:11:37.381338 systemd-logind[1575]: Session 26 logged out. Waiting for processes to exit.
Dec 13 14:11:37.382196 systemd-logind[1575]: Removed session 26.
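Note: the PROCTITLE records above carry the process title hex-encoded; the kernel "audit: type=1327" echo and the audit[...] PROCTITLE journal entry are the same record delivered over two paths. A minimal Python sketch for decoding the value (the hex string is copied from the log above; the variable name is just for illustration):

    # Decode an audit PROCTITLE field: it is the process title as hex-encoded bytes.
    raw = "737368643A20636F7265205B707269765D"   # value taken from the log entries above
    print(bytes.fromhex(raw).decode())            # -> sshd: core [priv]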
Dec 13 14:11:42.425968 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 14:11:42.426078 kernel: audit: type=1130 audit(1734099102.399:588): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.36:22-10.200.16.10:39412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:11:42.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.36:22-10.200.16.10:39412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:11:42.399677 systemd[1]: Started sshd@24-10.200.20.36:22-10.200.16.10:39412.service.
Dec 13 14:11:42.823000 audit[6056]: USER_ACCT pid=6056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:42.824398 sshd[6056]: Accepted publickey for core from 10.200.16.10 port 39412 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI
Dec 13 14:11:42.826231 sshd[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:11:42.831372 systemd[1]: Started session-27.scope.
Dec 13 14:11:42.832485 systemd-logind[1575]: New session 27 of user core.
Dec 13 14:11:42.825000 audit[6056]: CRED_ACQ pid=6056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:42.869442 kernel: audit: type=1101 audit(1734099102.823:589): pid=6056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:42.869580 kernel: audit: type=1103 audit(1734099102.825:590): pid=6056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:42.883806 kernel: audit: type=1006 audit(1734099102.825:591): pid=6056 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Dec 13 14:11:42.883931 kernel: audit: type=1300 audit(1734099102.825:591): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff696ecc0 a2=3 a3=1 items=0 ppid=1 pid=6056 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:11:42.825000 audit[6056]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff696ecc0 a2=3 a3=1 items=0 ppid=1 pid=6056 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:11:42.825000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:11:42.914732 kernel: audit: type=1327 audit(1734099102.825:591): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:11:42.914852 kernel: audit: type=1105 audit(1734099102.836:592): pid=6056 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:42.836000 audit[6056]: USER_START pid=6056 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:42.838000 audit[6058]: CRED_ACQ pid=6058 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:42.961072 kernel: audit: type=1103 audit(1734099102.838:593): pid=6058 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:43.200807 sshd[6056]: pam_unix(sshd:session): session closed for user core
Dec 13 14:11:43.201000 audit[6056]: USER_END pid=6056 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:43.205697 systemd-logind[1575]: Session 27 logged out. Waiting for processes to exit.
Dec 13 14:11:43.207191 systemd[1]: sshd@24-10.200.20.36:22-10.200.16.10:39412.service: Deactivated successfully.
Dec 13 14:11:43.208170 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 14:11:43.209894 systemd-logind[1575]: Removed session 27.
Dec 13 14:11:43.201000 audit[6056]: CRED_DISP pid=6056 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:43.248531 kernel: audit: type=1106 audit(1734099103.201:594): pid=6056 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:43.248697 kernel: audit: type=1104 audit(1734099103.201:595): pid=6056 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:11:43.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.36:22-10.200.16.10:39412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
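Note on the sshd@23-... and sshd@24-....service units above: this per-connection naming (counter, local address:port, peer address:port) is what systemd uses for socket-activated services with Accept=yes, which is why each SSH login is bracketed by a SERVICE_START/SERVICE_STOP audit pair alongside its session-N.scope. The sketch below shows that general pattern under the assumption that this image uses socket-activated SSH; the exact unit files shipped on the host are not in the log and may differ:

    # sshd.socket (illustrative sketch; not read from this host)
    [Socket]
    ListenStream=22
    Accept=yes

    # sshd@.service (illustrative sketch) - one instance spawned per accepted connection
    [Service]
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket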