Jun 25 14:52:36.141164 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jun 25 14:52:36.141232 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT Tue Jun 25 13:19:44 -00 2024 Jun 25 14:52:36.141241 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jun 25 14:52:36.141249 kernel: printk: bootconsole [pl11] enabled Jun 25 14:52:36.141255 kernel: efi: EFI v2.70 by EDK II Jun 25 14:52:36.141261 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x3e94ae18 Jun 25 14:52:36.141267 kernel: random: crng init done Jun 25 14:52:36.141273 kernel: ACPI: Early table checksum verification disabled Jun 25 14:52:36.141278 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Jun 25 14:52:36.141284 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:52:36.141290 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:52:36.141297 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jun 25 14:52:36.141302 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:52:36.141308 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:52:36.141327 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:52:36.141333 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:52:36.141339 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:52:36.141347 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:52:36.141353 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jun 25 14:52:36.141359 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:52:36.141365 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jun 25 14:52:36.141371 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jun 25 14:52:36.141376 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jun 25 14:52:36.141384 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jun 25 14:52:36.141389 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jun 25 14:52:36.141396 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jun 25 14:52:36.141402 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jun 25 14:52:36.141409 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jun 25 14:52:36.141415 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jun 25 14:52:36.141420 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jun 25 14:52:36.141427 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jun 25 14:52:36.141432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jun 25 14:52:36.141439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jun 25 14:52:36.141445 kernel: NUMA: NODE_DATA [mem 0x1bf7ec800-0x1bf7f1fff] Jun 25 14:52:36.141451 kernel: Zone ranges: Jun 25 14:52:36.141457 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jun 25 14:52:36.141462 kernel: DMA32 
empty Jun 25 14:52:36.141468 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jun 25 14:52:36.141474 kernel: Movable zone start for each node Jun 25 14:52:36.141481 kernel: Early memory node ranges Jun 25 14:52:36.141490 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jun 25 14:52:36.141496 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Jun 25 14:52:36.141503 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Jun 25 14:52:36.141509 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Jun 25 14:52:36.141517 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Jun 25 14:52:36.141523 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Jun 25 14:52:36.141530 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Jun 25 14:52:36.141536 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Jun 25 14:52:36.141542 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jun 25 14:52:36.141549 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jun 25 14:52:36.141556 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jun 25 14:52:36.141562 kernel: psci: probing for conduit method from ACPI. Jun 25 14:52:36.141568 kernel: psci: PSCIv1.1 detected in firmware. Jun 25 14:52:36.141574 kernel: psci: Using standard PSCI v0.2 function IDs Jun 25 14:52:36.141580 kernel: psci: MIGRATE_INFO_TYPE not supported. Jun 25 14:52:36.141586 kernel: psci: SMC Calling Convention v1.4 Jun 25 14:52:36.141594 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jun 25 14:52:36.141601 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jun 25 14:52:36.141607 kernel: percpu: Embedded 30 pages/cpu s83880 r8192 d30808 u122880 Jun 25 14:52:36.141613 kernel: pcpu-alloc: s83880 r8192 d30808 u122880 alloc=30*4096 Jun 25 14:52:36.141620 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 25 14:52:36.141626 kernel: Detected PIPT I-cache on CPU0 Jun 25 14:52:36.141632 kernel: CPU features: detected: GIC system register CPU interface Jun 25 14:52:36.141638 kernel: CPU features: detected: Hardware dirty bit management Jun 25 14:52:36.141644 kernel: CPU features: detected: Spectre-BHB Jun 25 14:52:36.141650 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 25 14:52:36.141656 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 25 14:52:36.141664 kernel: CPU features: detected: ARM erratum 1418040 Jun 25 14:52:36.141670 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jun 25 14:52:36.141676 kernel: alternatives: applying boot alternatives Jun 25 14:52:36.141682 kernel: Fallback order for Node 0: 0 Jun 25 14:52:36.141689 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jun 25 14:52:36.141695 kernel: Policy zone: Normal Jun 25 14:52:36.141702 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8 Jun 25 14:52:36.141709 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 25 14:52:36.141715 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 14:52:36.141721 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 14:52:36.141727 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 14:52:36.141735 kernel: software IO TLB: area num 2. Jun 25 14:52:36.141741 kernel: software IO TLB: mapped [mem 0x000000003a94a000-0x000000003e94a000] (64MB) Jun 25 14:52:36.141748 kernel: Memory: 3991388K/4194160K available (9984K kernel code, 2108K rwdata, 7720K rodata, 34688K init, 894K bss, 202772K reserved, 0K cma-reserved) Jun 25 14:52:36.141754 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 14:52:36.141760 kernel: trace event string verifier disabled Jun 25 14:52:36.141766 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 14:52:36.141773 kernel: rcu: RCU event tracing is enabled. Jun 25 14:52:36.141780 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 14:52:36.141786 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 14:52:36.141792 kernel: Tracing variant of Tasks RCU enabled. Jun 25 14:52:36.141798 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 14:52:36.141806 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 14:52:36.141812 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 25 14:52:36.141818 kernel: GICv3: 960 SPIs implemented Jun 25 14:52:36.141825 kernel: GICv3: 0 Extended SPIs implemented Jun 25 14:52:36.141831 kernel: Root IRQ handler: gic_handle_irq Jun 25 14:52:36.141837 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jun 25 14:52:36.141843 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jun 25 14:52:36.141849 kernel: ITS: No ITS available, not enabling LPIs Jun 25 14:52:36.141855 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 14:52:36.141862 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 14:52:36.141868 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jun 25 14:52:36.141875 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jun 25 14:52:36.141882 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jun 25 14:52:36.141889 kernel: Console: colour dummy device 80x25 Jun 25 14:52:36.141895 kernel: printk: console [tty1] enabled Jun 25 14:52:36.141902 kernel: ACPI: Core revision 20220331 Jun 25 14:52:36.141908 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jun 25 14:52:36.141915 kernel: pid_max: default: 32768 minimum: 301 Jun 25 14:52:36.141921 kernel: LSM: Security Framework initializing Jun 25 14:52:36.141928 kernel: SELinux: Initializing. Jun 25 14:52:36.141934 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 14:52:36.141942 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 14:52:36.141948 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 14:52:36.141955 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 14:52:36.141962 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 14:52:36.141968 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. 
Jun 25 14:52:36.141974 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jun 25 14:52:36.141981 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Jun 25 14:52:36.141987 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 25 14:52:36.141999 kernel: rcu: Hierarchical SRCU implementation. Jun 25 14:52:36.142006 kernel: rcu: Max phase no-delay instances is 400. Jun 25 14:52:36.142013 kernel: Remapping and enabling EFI services. Jun 25 14:52:36.142019 kernel: smp: Bringing up secondary CPUs ... Jun 25 14:52:36.142027 kernel: Detected PIPT I-cache on CPU1 Jun 25 14:52:36.142034 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jun 25 14:52:36.142041 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 14:52:36.142048 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jun 25 14:52:36.142054 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 14:52:36.142062 kernel: SMP: Total of 2 processors activated. Jun 25 14:52:36.142069 kernel: CPU features: detected: 32-bit EL0 Support Jun 25 14:52:36.142075 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jun 25 14:52:36.142082 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 25 14:52:36.142089 kernel: CPU features: detected: CRC32 instructions Jun 25 14:52:36.142096 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 25 14:52:36.142102 kernel: CPU features: detected: LSE atomic instructions Jun 25 14:52:36.142109 kernel: CPU features: detected: Privileged Access Never Jun 25 14:52:36.142116 kernel: CPU: All CPU(s) started at EL1 Jun 25 14:52:36.142124 kernel: alternatives: applying system-wide alternatives Jun 25 14:52:36.142134 kernel: devtmpfs: initialized Jun 25 14:52:36.142141 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 14:52:36.142148 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 14:52:36.142155 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 14:52:36.142161 kernel: SMBIOS 3.1.0 present. Jun 25 14:52:36.142168 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023 Jun 25 14:52:36.142175 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 14:52:36.142189 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 25 14:52:36.142197 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 25 14:52:36.142204 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 25 14:52:36.142211 kernel: audit: initializing netlink subsys (disabled) Jun 25 14:52:36.142218 kernel: audit: type=2000 audit(0.048:1): state=initialized audit_enabled=0 res=1 Jun 25 14:52:36.142224 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 14:52:36.142231 kernel: cpuidle: using governor menu Jun 25 14:52:36.142237 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jun 25 14:52:36.142244 kernel: ASID allocator initialised with 32768 entries Jun 25 14:52:36.142251 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 14:52:36.142259 kernel: Serial: AMBA PL011 UART driver Jun 25 14:52:36.142266 kernel: KASLR enabled Jun 25 14:52:36.142273 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 14:52:36.142279 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 14:52:36.142286 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 25 14:52:36.142293 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 25 14:52:36.142299 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 14:52:36.142306 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 14:52:36.142313 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 25 14:52:36.142320 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 25 14:52:36.142327 kernel: ACPI: Added _OSI(Module Device) Jun 25 14:52:36.142334 kernel: ACPI: Added _OSI(Processor Device) Jun 25 14:52:36.142340 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 14:52:36.142347 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 14:52:36.142354 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 14:52:36.142360 kernel: ACPI: Interpreter enabled Jun 25 14:52:36.142367 kernel: ACPI: Using GIC for interrupt routing Jun 25 14:52:36.142374 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jun 25 14:52:36.142382 kernel: printk: console [ttyAMA0] enabled Jun 25 14:52:36.142388 kernel: printk: bootconsole [pl11] disabled Jun 25 14:52:36.142395 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jun 25 14:52:36.142402 kernel: iommu: Default domain type: Translated Jun 25 14:52:36.142408 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 25 14:52:36.142415 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 14:52:36.142422 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 14:52:36.142429 kernel: PTP clock support registered Jun 25 14:52:36.142435 kernel: Registered efivars operations Jun 25 14:52:36.142444 kernel: No ACPI PMU IRQ for CPU0 Jun 25 14:52:36.142451 kernel: No ACPI PMU IRQ for CPU1 Jun 25 14:52:36.142457 kernel: vgaarb: loaded Jun 25 14:52:36.142464 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 25 14:52:36.142471 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 14:52:36.142477 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 14:52:36.142484 kernel: pnp: PnP ACPI init Jun 25 14:52:36.142491 kernel: pnp: PnP ACPI: found 0 devices Jun 25 14:52:36.142498 kernel: NET: Registered PF_INET protocol family Jun 25 14:52:36.142506 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 14:52:36.142513 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 14:52:36.142519 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 14:52:36.142526 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 14:52:36.142533 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 14:52:36.142540 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 14:52:36.142546 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 14:52:36.142553 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 14:52:36.142560 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 14:52:36.142568 kernel: PCI: CLS 0 bytes, default 64 Jun 25 14:52:36.142575 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jun 25 14:52:36.142581 kernel: kvm [1]: HYP mode not available Jun 25 14:52:36.142588 kernel: Initialise system trusted keyrings Jun 25 14:52:36.142595 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 14:52:36.142602 kernel: Key type asymmetric registered Jun 25 14:52:36.142608 kernel: Asymmetric key parser 'x509' registered Jun 25 14:52:36.142615 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 14:52:36.142621 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 14:52:36.142629 kernel: io scheduler mq-deadline registered Jun 25 14:52:36.142636 kernel: io scheduler kyber registered Jun 25 14:52:36.142643 kernel: io scheduler bfq registered Jun 25 14:52:36.142649 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 14:52:36.142656 kernel: thunder_xcv, ver 1.0 Jun 25 14:52:36.142663 kernel: thunder_bgx, ver 1.0 Jun 25 14:52:36.142669 kernel: nicpf, ver 1.0 Jun 25 14:52:36.142676 kernel: nicvf, ver 1.0 Jun 25 14:52:36.142799 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 25 14:52:36.142865 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T14:52:35 UTC (1719327155) Jun 25 14:52:36.142875 kernel: efifb: probing for efifb Jun 25 14:52:36.142882 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 25 14:52:36.142889 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 25 14:52:36.142896 kernel: efifb: scrolling: redraw Jun 25 14:52:36.142902 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 25 14:52:36.142909 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 14:52:36.142916 kernel: fb0: EFI VGA frame buffer device Jun 25 14:52:36.142925 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not 
implemented, skipping .... Jun 25 14:52:36.142932 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 14:52:36.142939 kernel: NET: Registered PF_INET6 protocol family Jun 25 14:52:36.142946 kernel: Segment Routing with IPv6 Jun 25 14:52:36.142952 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 14:52:36.142959 kernel: NET: Registered PF_PACKET protocol family Jun 25 14:52:36.142966 kernel: Key type dns_resolver registered Jun 25 14:52:36.142972 kernel: registered taskstats version 1 Jun 25 14:52:36.142979 kernel: Loading compiled-in X.509 certificates Jun 25 14:52:36.142988 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: 0fa2e892f90caac26ef50b6d7e7f5c106b0c7e83' Jun 25 14:52:36.142994 kernel: Key type .fscrypt registered Jun 25 14:52:36.143001 kernel: Key type fscrypt-provisioning registered Jun 25 14:52:36.143010 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 14:52:36.143017 kernel: ima: Allocated hash algorithm: sha1 Jun 25 14:52:36.143024 kernel: ima: No architecture policies found Jun 25 14:52:36.143031 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 25 14:52:36.143037 kernel: clk: Disabling unused clocks Jun 25 14:52:36.143044 kernel: Freeing unused kernel memory: 34688K Jun 25 14:52:36.143053 kernel: Run /init as init process Jun 25 14:52:36.143060 kernel: with arguments: Jun 25 14:52:36.143066 kernel: /init Jun 25 14:52:36.143073 kernel: with environment: Jun 25 14:52:36.143079 kernel: HOME=/ Jun 25 14:52:36.143086 kernel: TERM=linux Jun 25 14:52:36.143092 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 14:52:36.143101 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 14:52:36.143111 systemd[1]: Detected virtualization microsoft. Jun 25 14:52:36.143119 systemd[1]: Detected architecture arm64. Jun 25 14:52:36.143126 systemd[1]: Running in initrd. Jun 25 14:52:36.143133 systemd[1]: No hostname configured, using default hostname. Jun 25 14:52:36.143140 systemd[1]: Hostname set to . Jun 25 14:52:36.143148 systemd[1]: Initializing machine ID from random generator. Jun 25 14:52:36.143155 systemd[1]: Queued start job for default target initrd.target. Jun 25 14:52:36.143163 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:52:36.143172 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:52:36.143189 systemd[1]: Reached target paths.target - Path Units. Jun 25 14:52:36.143198 systemd[1]: Reached target slices.target - Slice Units. Jun 25 14:52:36.143205 systemd[1]: Reached target swap.target - Swaps. Jun 25 14:52:36.158881 systemd[1]: Reached target timers.target - Timer Units. Jun 25 14:52:36.158897 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 14:52:36.158905 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 14:52:36.158921 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 14:52:36.158928 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 14:52:36.158936 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jun 25 14:52:36.158944 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:52:36.158951 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 14:52:36.158959 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:52:36.158967 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 14:52:36.158974 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 14:52:36.158982 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 14:52:36.158991 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 14:52:36.158998 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 14:52:36.159006 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 14:52:36.159013 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 14:52:36.159021 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 14:52:36.159034 systemd-journald[208]: Journal started Jun 25 14:52:36.159100 systemd-journald[208]: Runtime Journal (/run/log/journal/5f77268c1b5a42afbd925f3a81a64c0f) is 8.0M, max 78.6M, 70.6M free. Jun 25 14:52:36.136429 systemd-modules-load[209]: Inserted module 'overlay' Jun 25 14:52:36.186850 kernel: Bridge firewalling registered Jun 25 14:52:36.186870 kernel: SCSI subsystem initialized Jun 25 14:52:36.186887 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 14:52:36.186899 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 14:52:36.167628 systemd-modules-load[209]: Inserted module 'br_netfilter' Jun 25 14:52:36.216052 kernel: device-mapper: uevent: version 1.0.3 Jun 25 14:52:36.216073 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 14:52:36.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.220273 systemd-modules-load[209]: Inserted module 'dm_multipath' Jun 25 14:52:36.257514 kernel: audit: type=1130 audit(1719327156.216:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.257536 kernel: audit: type=1130 audit(1719327156.236:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.222577 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:52:36.281766 kernel: audit: type=1130 audit(1719327156.261:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 14:52:36.237118 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 14:52:36.306340 kernel: audit: type=1130 audit(1719327156.287:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.261829 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 14:52:36.334585 kernel: audit: type=1130 audit(1719327156.313:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.288299 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 14:52:36.341374 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 14:52:36.347375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 14:52:36.355711 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 14:52:36.380323 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 14:52:36.392001 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 14:52:36.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.399161 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:52:36.448280 kernel: audit: type=1130 audit(1719327156.398:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.448304 kernel: audit: type=1130 audit(1719327156.428:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.428596 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:52:36.454169 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:52:36.490500 kernel: audit: type=1130 audit(1719327156.453:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:52:36.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.505916 kernel: audit: type=1130 audit(1719327156.483:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.507708 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 14:52:36.518000 audit: BPF prog-id=6 op=LOAD Jun 25 14:52:36.519753 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 14:52:36.546391 dracut-cmdline[232]: dracut-dracut-053 Jun 25 14:52:36.552143 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8 Jun 25 14:52:36.555944 systemd-resolved[237]: Positive Trust Anchors: Jun 25 14:52:36.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.555951 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 14:52:36.555978 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 14:52:36.558287 systemd-resolved[237]: Defaulting to hostname 'linux'. Jun 25 14:52:36.583594 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 14:52:36.590351 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:52:36.675216 kernel: Loading iSCSI transport class v2.0-870. Jun 25 14:52:36.686213 kernel: iscsi: registered transport (tcp) Jun 25 14:52:36.704884 kernel: iscsi: registered transport (qla4xxx) Jun 25 14:52:36.704939 kernel: QLogic iSCSI HBA Driver Jun 25 14:52:36.743526 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 14:52:36.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:36.759597 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jun 25 14:52:36.820221 kernel: raid6: neonx8 gen() 15736 MB/s Jun 25 14:52:36.840195 kernel: raid6: neonx4 gen() 15656 MB/s Jun 25 14:52:36.861193 kernel: raid6: neonx2 gen() 13214 MB/s Jun 25 14:52:36.881196 kernel: raid6: neonx1 gen() 10491 MB/s Jun 25 14:52:36.901192 kernel: raid6: int64x8 gen() 6982 MB/s Jun 25 14:52:36.922197 kernel: raid6: int64x4 gen() 7333 MB/s Jun 25 14:52:36.942192 kernel: raid6: int64x2 gen() 6133 MB/s Jun 25 14:52:36.965384 kernel: raid6: int64x1 gen() 5058 MB/s Jun 25 14:52:36.965394 kernel: raid6: using algorithm neonx8 gen() 15736 MB/s Jun 25 14:52:36.990481 kernel: raid6: .... xor() 11909 MB/s, rmw enabled Jun 25 14:52:36.990497 kernel: raid6: using neon recovery algorithm Jun 25 14:52:37.002463 kernel: xor: measuring software checksum speed Jun 25 14:52:37.002486 kernel: 8regs : 19873 MB/sec Jun 25 14:52:37.010065 kernel: 32regs : 19659 MB/sec Jun 25 14:52:37.010086 kernel: arm64_neon : 27072 MB/sec Jun 25 14:52:37.014027 kernel: xor: using function: arm64_neon (27072 MB/sec) Jun 25 14:52:37.070202 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jun 25 14:52:37.081411 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 14:52:37.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:37.091000 audit: BPF prog-id=7 op=LOAD Jun 25 14:52:37.091000 audit: BPF prog-id=8 op=LOAD Jun 25 14:52:37.094379 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:52:37.121662 systemd-udevd[409]: Using default interface naming scheme 'v252'. Jun 25 14:52:37.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:37.128317 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:52:37.152304 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 14:52:37.163886 dracut-pre-trigger[430]: rd.md=0: removing MD RAID activation Jun 25 14:52:37.188399 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 14:52:37.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:37.200643 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 14:52:37.232448 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:52:37.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:52:37.288216 kernel: hv_vmbus: Vmbus version:5.3 Jun 25 14:52:37.298109 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 14:52:37.298168 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 14:52:37.298178 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 25 14:52:37.316438 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 25 14:52:37.316489 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 14:52:37.327967 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 14:52:37.339202 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 14:52:37.347742 kernel: scsi host0: storvsc_host_t Jun 25 14:52:37.347929 kernel: scsi host1: storvsc_host_t Jun 25 14:52:37.347953 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 14:52:37.361214 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 14:52:37.378869 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 14:52:37.380076 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 14:52:37.380089 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 14:52:37.404467 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 14:52:37.433442 kernel: hv_netvsc 002248ba-d811-0022-48ba-d811002248ba eth0: VF slot 1 added Jun 25 14:52:37.433569 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 14:52:37.433678 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 14:52:37.433771 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 14:52:37.433853 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 25 14:52:37.433935 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 14:52:37.433944 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 14:52:37.447779 kernel: hv_vmbus: registering driver hv_pci Jun 25 14:52:37.447844 kernel: hv_pci fda60755-bec5-4a80-81fb-f576f8e7af56: PCI VMBus probing: Using version 0x10004 Jun 25 14:52:37.517554 kernel: hv_pci fda60755-bec5-4a80-81fb-f576f8e7af56: PCI host bridge to bus bec5:00 Jun 25 14:52:37.517731 kernel: pci_bus bec5:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jun 25 14:52:37.517832 kernel: pci_bus bec5:00: No busn resource found for root bus, will use [bus 00-ff] Jun 25 14:52:37.517922 kernel: pci bec5:00:02.0: [15b3:1018] type 00 class 0x020000 Jun 25 14:52:37.518027 kernel: pci bec5:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 25 14:52:37.518122 kernel: pci bec5:00:02.0: enabling Extended Tags Jun 25 14:52:37.518229 kernel: pci bec5:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bec5:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jun 25 14:52:37.518314 kernel: pci_bus bec5:00: busn_res: [bus 00-ff] end is updated to 00 Jun 25 14:52:37.518393 kernel: pci bec5:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 25 14:52:37.551341 kernel: mlx5_core bec5:00:02.0: enabling device (0000 -> 0002) Jun 25 14:52:37.776845 kernel: mlx5_core bec5:00:02.0: firmware version: 16.30.1284 Jun 25 14:52:37.777005 kernel: mlx5_core bec5:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Jun 25 14:52:37.777101 kernel: hv_netvsc 002248ba-d811-0022-48ba-d811002248ba eth0: VF registering: eth1 Jun 25 14:52:37.777216 kernel: mlx5_core bec5:00:02.0 eth1: 
joined to eth0 Jun 25 14:52:37.790203 kernel: mlx5_core bec5:00:02.0 enP48837s1: renamed from eth1 Jun 25 14:52:37.881476 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 14:52:37.914252 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (469) Jun 25 14:52:37.927395 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 14:52:38.084985 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 14:52:38.109341 kernel: BTRFS: device fsid 4f04fb4d-edd3-40b1-b587-481b761003a7 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (468) Jun 25 14:52:38.121290 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 14:52:38.127195 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 25 14:52:38.149657 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 14:52:38.170300 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 14:52:38.178208 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 14:52:39.186204 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 14:52:39.186396 disk-uuid[548]: The operation has completed successfully. Jun 25 14:52:39.256005 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 14:52:39.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:39.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:39.256113 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 14:52:39.278708 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 14:52:39.290399 sh[660]: Success Jun 25 14:52:39.320209 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 25 14:52:39.591600 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 14:52:39.597462 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 14:52:39.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:39.608768 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 14:52:39.649463 kernel: BTRFS info (device dm-0): first mount of filesystem 4f04fb4d-edd3-40b1-b587-481b761003a7 Jun 25 14:52:39.649529 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:52:39.655817 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 14:52:39.660851 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 14:52:39.664785 kernel: BTRFS info (device dm-0): using free space tree Jun 25 14:52:39.935883 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 14:52:39.940576 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 14:52:39.959665 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jun 25 14:52:39.967790 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 14:52:39.998280 kernel: BTRFS info (device sda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:52:39.998337 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:52:40.002438 kernel: BTRFS info (device sda6): using free space tree Jun 25 14:52:40.045300 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 14:52:40.058309 kernel: BTRFS info (device sda6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:52:40.065630 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 14:52:40.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:40.079327 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 14:52:40.097915 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:52:40.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:40.110000 audit: BPF prog-id=9 op=LOAD Jun 25 14:52:40.119394 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 14:52:40.147069 systemd-networkd[842]: lo: Link UP Jun 25 14:52:40.147087 systemd-networkd[842]: lo: Gained carrier Jun 25 14:52:40.164147 kernel: kauditd_printk_skb: 15 callbacks suppressed Jun 25 14:52:40.164171 kernel: audit: type=1130 audit(1719327160.156:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:40.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:40.147541 systemd-networkd[842]: Enumeration completed Jun 25 14:52:40.150453 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 14:52:40.163567 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:52:40.163571 systemd-networkd[842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:52:40.164430 systemd[1]: Reached target network.target - Network. Jun 25 14:52:40.206760 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 14:52:40.250058 kernel: audit: type=1130 audit(1719327160.224:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:40.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:40.215245 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. 
Jun 25 14:52:40.260342 iscsid[848]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:52:40.260342 iscsid[848]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jun 25 14:52:40.260342 iscsid[848]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jun 25 14:52:40.260342 iscsid[848]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 14:52:40.260342 iscsid[848]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 14:52:40.260342 iscsid[848]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:52:40.260342 iscsid[848]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 14:52:40.391964 kernel: audit: type=1130 audit(1719327160.264:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:40.391994 kernel: audit: type=1130 audit(1719327160.317:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:40.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:40.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:40.230071 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 14:52:40.260275 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 14:52:40.411730 kernel: mlx5_core bec5:00:02.0 enP48837s1: Link up Jun 25 14:52:40.286528 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 14:52:40.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:40.310596 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 14:52:40.450069 kernel: audit: type=1130 audit(1719327160.417:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:40.450098 kernel: hv_netvsc 002248ba-d811-0022-48ba-d811002248ba eth0: Data path switched to VF: enP48837s1 Jun 25 14:52:40.317369 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 14:52:40.462219 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:52:40.341651 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:52:40.360494 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 14:52:40.389731 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 14:52:40.407988 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jun 25 14:52:40.455792 systemd-networkd[842]: enP48837s1: Link UP Jun 25 14:52:40.455866 systemd-networkd[842]: eth0: Link UP Jun 25 14:52:40.455979 systemd-networkd[842]: eth0: Gained carrier Jun 25 14:52:40.455988 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:52:40.463355 systemd-networkd[842]: enP48837s1: Gained carrier Jun 25 14:52:40.493246 systemd-networkd[842]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 14:52:41.028784 ignition[832]: Ignition 2.15.0 Jun 25 14:52:41.032541 ignition[832]: Stage: fetch-offline Jun 25 14:52:41.032600 ignition[832]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:52:41.037243 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:52:41.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:41.032609 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:52:41.075981 kernel: audit: type=1130 audit(1719327161.047:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:41.032726 ignition[832]: parsed url from cmdline: "" Jun 25 14:52:41.032729 ignition[832]: no config URL provided Jun 25 14:52:41.032733 ignition[832]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 14:52:41.079698 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 14:52:41.032741 ignition[832]: no config at "/usr/lib/ignition/user.ign" Jun 25 14:52:41.032754 ignition[832]: failed to fetch config: resource requires networking Jun 25 14:52:41.036294 ignition[832]: Ignition finished successfully Jun 25 14:52:41.090573 ignition[867]: Ignition 2.15.0 Jun 25 14:52:41.090585 ignition[867]: Stage: fetch Jun 25 14:52:41.090784 ignition[867]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:52:41.090798 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:52:41.090911 ignition[867]: parsed url from cmdline: "" Jun 25 14:52:41.090914 ignition[867]: no config URL provided Jun 25 14:52:41.090919 ignition[867]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 14:52:41.090927 ignition[867]: no config at "/usr/lib/ignition/user.ign" Jun 25 14:52:41.090962 ignition[867]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 14:52:41.195899 ignition[867]: GET result: OK Jun 25 14:52:41.195999 ignition[867]: config has been read from IMDS userdata Jun 25 14:52:41.196055 ignition[867]: parsing config with SHA512: b22048df453cf48835c1d215649d1ec42d7534bb21febc604e2b86db1340aa8c82aded4fa4bcd184d4e2a9bad4983d7c9bbd1bf3b5081cbc92f0e28ff7d2ade0 Jun 25 14:52:41.200405 unknown[867]: fetched base config from "system" Jun 25 14:52:41.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:52:41.200833 ignition[867]: fetch: fetch complete Jun 25 14:52:41.234923 kernel: audit: type=1130 audit(1719327161.210:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:41.200413 unknown[867]: fetched base config from "system" Jun 25 14:52:41.200839 ignition[867]: fetch: fetch passed Jun 25 14:52:41.200425 unknown[867]: fetched user config from "azure" Jun 25 14:52:41.200883 ignition[867]: Ignition finished successfully Jun 25 14:52:41.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:41.205198 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 14:52:41.284053 kernel: audit: type=1130 audit(1719327161.259:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:41.251142 ignition[874]: Ignition 2.15.0 Jun 25 14:52:41.235452 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 14:52:41.251149 ignition[874]: Stage: kargs Jun 25 14:52:41.254984 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 14:52:41.251312 ignition[874]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:52:41.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:41.287274 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 14:52:41.345638 kernel: audit: type=1130 audit(1719327161.314:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:41.251325 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:52:41.308384 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 14:52:41.253370 ignition[874]: kargs: kargs passed Jun 25 14:52:41.314511 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 14:52:41.253435 ignition[874]: Ignition finished successfully Jun 25 14:52:41.340700 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:52:41.298872 ignition[880]: Ignition 2.15.0 Jun 25 14:52:41.351224 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 14:52:41.298886 ignition[880]: Stage: disks Jun 25 14:52:41.366136 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:52:41.299036 ignition[880]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:52:41.380779 systemd[1]: Reached target basic.target - Basic System. Jun 25 14:52:41.299046 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:52:41.408511 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 14:52:41.300387 ignition[880]: disks: disks passed Jun 25 14:52:41.300433 ignition[880]: Ignition finished successfully Jun 25 14:52:41.501416 systemd-fsck[888]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 14:52:41.510704 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jun 25 14:52:41.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:41.538395 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 14:52:41.549546 kernel: audit: type=1130 audit(1719327161.516:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:41.591224 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 14:52:41.591769 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 14:52:41.595948 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 14:52:41.661293 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 14:52:41.666963 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 14:52:41.675461 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 14:52:41.680562 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 14:52:41.680610 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:52:41.691985 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 14:52:41.755121 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (897) Jun 25 14:52:41.755146 kernel: BTRFS info (device sda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:52:41.755156 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:52:41.755165 kernel: BTRFS info (device sda6): using free space tree Jun 25 14:52:41.702985 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 14:52:41.762405 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 14:52:41.916352 systemd-networkd[842]: eth0: Gained IPv6LL Jun 25 14:52:42.430737 coreos-metadata[899]: Jun 25 14:52:42.430 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 14:52:42.445506 coreos-metadata[899]: Jun 25 14:52:42.445 INFO Fetch successful Jun 25 14:52:42.450579 coreos-metadata[899]: Jun 25 14:52:42.450 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 14:52:42.463497 coreos-metadata[899]: Jun 25 14:52:42.463 INFO Fetch successful Jun 25 14:52:42.478535 coreos-metadata[899]: Jun 25 14:52:42.478 INFO wrote hostname ci-3815.2.4-a-f605b45a38 to /sysroot/etc/hostname Jun 25 14:52:42.487233 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 14:52:42.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:52:42.690035 initrd-setup-root[925]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 14:52:42.742424 initrd-setup-root[932]: cut: /sysroot/etc/group: No such file or directory Jun 25 14:52:42.751316 initrd-setup-root[939]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 14:52:42.760700 initrd-setup-root[946]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 14:52:43.499154 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 14:52:43.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:43.513664 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 14:52:43.520122 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 14:52:43.542345 kernel: BTRFS info (device sda6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:52:43.542626 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 14:52:43.559321 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 14:52:43.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:43.572044 ignition[1013]: INFO : Ignition 2.15.0 Jun 25 14:52:43.577156 ignition[1013]: INFO : Stage: mount Jun 25 14:52:43.577156 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:52:43.577156 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:52:43.577156 ignition[1013]: INFO : mount: mount passed Jun 25 14:52:43.577156 ignition[1013]: INFO : Ignition finished successfully Jun 25 14:52:43.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:43.579089 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 14:52:43.606840 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 14:52:43.619782 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 14:52:43.645206 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1022) Jun 25 14:52:43.657973 kernel: BTRFS info (device sda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:52:43.658008 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:52:43.662424 kernel: BTRFS info (device sda6): using free space tree Jun 25 14:52:43.665929 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 14:52:43.692353 ignition[1040]: INFO : Ignition 2.15.0 Jun 25 14:52:43.696293 ignition[1040]: INFO : Stage: files Jun 25 14:52:43.696293 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:52:43.696293 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:52:43.696293 ignition[1040]: DEBUG : files: compiled without relabeling support, skipping Jun 25 14:52:43.726459 ignition[1040]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 14:52:43.726459 ignition[1040]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 14:52:43.801145 ignition[1040]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 14:52:43.808369 ignition[1040]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 14:52:43.808369 ignition[1040]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 14:52:43.808369 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:52:43.808369 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 25 14:52:43.801600 unknown[1040]: wrote ssh authorized keys file for user: core Jun 25 14:52:43.910294 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 14:52:44.129079 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:52:44.129079 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 14:52:44.149776 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 14:52:44.149776 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 14:52:44.149776 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 14:52:44.149776 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:52:44.149776 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:52:44.149776 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:52:44.149776 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:52:44.149776 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 14:52:44.149776 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 14:52:44.149776 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:52:44.149776 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:52:44.149776 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:52:44.149776 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Jun 25 14:52:44.550113 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 14:52:44.732741 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:52:44.744657 ignition[1040]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 14:52:44.773399 ignition[1040]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:52:44.785064 ignition[1040]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:52:44.785064 ignition[1040]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 14:52:44.785064 ignition[1040]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 14:52:44.785064 ignition[1040]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 14:52:44.785064 ignition[1040]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 14:52:44.785064 ignition[1040]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 14:52:44.785064 ignition[1040]: INFO : files: files passed Jun 25 14:52:44.785064 ignition[1040]: INFO : Ignition finished successfully Jun 25 14:52:44.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:44.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:44.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:44.784744 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 14:52:44.817686 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 14:52:44.828467 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 14:52:44.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:44.842944 systemd[1]: ignition-quench.service: Deactivated successfully. 
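Everything the files stage did above (the core user and its ssh keys, the fetched helm tarball, the plain files, the kubernetes.raw symlink, and the prepare-helm.service unit with its preset) is driven by the Ignition config fetched earlier from the azure datasource. The fragment below is only a hypothetical illustration of that shape: the spec version, field names, unit contents, and the placeholder ssh key are assumptions, while the paths and URLs are copied from the log.

    import json

    # Hypothetical Ignition v3-style fragment; not the config actually served to this VM.
    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {
            "users": [{"name": "core",
                       "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}],
        },
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"},
            ],
        },
        "systemd": {
            "units": [{"name": "prepare-helm.service", "enabled": True,
                       "contents": "[Unit]\nDescription=Unpack helm (placeholder)\n"}],
        },
    }

    print(json.dumps(config, indent=2))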
Jun 25 14:52:44.907596 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:52:44.907596 initrd-setup-root-after-ignition[1066]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:52:44.843045 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 14:52:44.930660 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:52:44.883006 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 14:52:44.889992 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 14:52:44.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:44.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:44.929599 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 14:52:44.946160 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 14:52:44.946289 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 14:52:44.957399 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 14:52:44.968625 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 14:52:44.980927 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 14:52:45.003710 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 14:52:45.027838 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:52:45.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.044640 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 14:52:45.060050 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:52:45.066169 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:52:45.078481 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 14:52:45.089362 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 14:52:45.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.089473 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:52:45.099828 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 14:52:45.110606 systemd[1]: Stopped target basic.target - Basic System. Jun 25 14:52:45.121983 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 14:52:45.133690 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:52:45.144101 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jun 25 14:52:45.155750 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 14:52:45.167560 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 14:52:45.180420 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 14:52:45.191602 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 14:52:45.203385 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:52:45.214523 systemd[1]: Stopped target swap.target - Swaps. Jun 25 14:52:45.244707 kernel: kauditd_printk_skb: 12 callbacks suppressed Jun 25 14:52:45.244733 kernel: audit: type=1131 audit(1719327165.236:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.224682 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 14:52:45.224791 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 14:52:45.261773 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:52:45.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.272796 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 14:52:45.272900 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 14:52:45.337898 kernel: audit: type=1131 audit(1719327165.283:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.337924 kernel: audit: type=1131 audit(1719327165.316:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.305686 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 14:52:45.363370 kernel: audit: type=1131 audit(1719327165.342:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.305816 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 14:52:45.391941 kernel: audit: type=1131 audit(1719327165.370:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:52:45.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.316726 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 14:52:45.316814 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 14:52:45.343218 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 14:52:45.343317 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 14:52:45.415037 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 14:52:45.426781 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 14:52:45.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.427012 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:52:45.463017 kernel: audit: type=1131 audit(1719327165.433:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.463218 ignition[1084]: INFO : Ignition 2.15.0 Jun 25 14:52:45.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.453714 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 14:52:45.495672 ignition[1084]: INFO : Stage: umount Jun 25 14:52:45.495672 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:52:45.495672 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:52:45.495672 ignition[1084]: INFO : umount: umount passed Jun 25 14:52:45.495672 ignition[1084]: INFO : Ignition finished successfully Jun 25 14:52:45.599709 kernel: audit: type=1131 audit(1719327165.471:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.599735 kernel: audit: type=1131 audit(1719327165.501:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.599746 kernel: audit: type=1131 audit(1719327165.526:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.599757 kernel: audit: type=1131 audit(1719327165.551:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:52:45.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.465311 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 14:52:45.465530 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:52:45.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.490700 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 14:52:45.490895 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 14:52:45.520640 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 14:52:45.520741 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 14:52:45.528258 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 14:52:45.528830 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 14:52:45.528932 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 14:52:45.551746 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 14:52:45.551800 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 14:52:45.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.585019 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 14:52:45.585070 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 14:52:45.594940 systemd[1]: Stopped target network.target - Network. Jun 25 14:52:45.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.604835 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 14:52:45.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.604904 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:52:45.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.761000 audit: BPF prog-id=6 op=UNLOAD Jun 25 14:52:45.615812 systemd[1]: Stopped target paths.target - Path Units. 
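The shutdown of the initrd units above shows up twice: as systemd's own "Stopped ..." lines and as kernel audit SERVICE_START/SERVICE_STOP records carrying the unit name. As a rough, hedged sketch (the pattern below is tailored to this log's "audit[1]: ... msg='unit=..." form, not a general audit parser), the per-unit lifecycle can be reconstructed from the audit records alone:

    import re
    import sys
    from collections import OrderedDict

    # Matches e.g.: audit[1]: SERVICE_STOP pid=1 ... msg='unit=ignition-kargs comm="systemd" ...
    AUDIT_RE = re.compile(
        r"audit\[\d+\]: (?P<event>SERVICE_START|SERVICE_STOP)\b.*?msg='unit=(?P<unit>[^ ]+)")

    def unit_events(lines):
        """Return an ordered map of unit name -> list of start/stop events seen."""
        events = OrderedDict()
        for line in lines:
            for m in AUDIT_RE.finditer(line):
                events.setdefault(m.group("unit"), []).append(m.group("event"))
        return events

    if __name__ == "__main__":
        for unit, evs in unit_events(sys.stdin).items():
            print(f"{unit}: {' -> '.join(evs)}")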
Jun 25 14:52:45.625818 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 14:52:45.638205 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:52:45.650028 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 14:52:45.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.659974 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 14:52:45.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.669734 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 14:52:45.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.669772 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 14:52:45.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.680833 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 14:52:45.680858 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 14:52:45.690903 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 14:52:45.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.690948 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 14:52:45.702446 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 14:52:45.712442 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 14:52:45.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.723795 systemd-networkd[842]: eth0: DHCPv6 lease lost Jun 25 14:52:45.909000 audit: BPF prog-id=9 op=UNLOAD Jun 25 14:52:45.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.725357 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 14:52:45.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.725466 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 14:52:45.947542 kernel: hv_netvsc 002248ba-d811-0022-48ba-d811002248ba eth0: Data path switched from VF: enP48837s1 Jun 25 14:52:45.736453 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 14:52:45.736557 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 14:52:45.750742 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jun 25 14:52:45.750845 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 14:52:45.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.762793 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 14:52:45.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.762834 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:52:45.784419 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 14:52:45.789301 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 14:52:45.789381 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:52:45.804308 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 14:52:46.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.804386 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:52:45.818720 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 14:52:45.818772 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 14:52:45.824467 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 14:52:45.824507 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:52:45.838845 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:52:45.857452 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 14:52:45.857548 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 14:52:45.858126 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 14:52:45.858271 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:52:45.874807 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 14:52:45.874852 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 14:52:45.880947 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 14:52:45.880979 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:52:45.891896 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 14:52:45.891952 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 14:52:45.903864 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 14:52:46.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.903907 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jun 25 14:52:45.914781 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 14:52:46.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:45.914818 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 14:52:45.954127 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 14:52:45.964259 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 14:52:45.964376 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 14:52:45.975489 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 14:52:45.975593 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 14:52:46.010244 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 14:52:46.010394 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 14:52:46.115839 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 14:52:46.115942 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 14:52:46.219205 systemd-journald[208]: Received SIGTERM from PID 1 (n/a). Jun 25 14:52:46.126131 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 14:52:46.219292 iscsid[848]: iscsid shutting down. Jun 25 14:52:46.136036 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 14:52:46.136092 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 14:52:46.156344 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 14:52:46.170494 systemd[1]: Switching root. Jun 25 14:52:46.219469 systemd-journald[208]: Journal stopped Jun 25 14:52:50.516970 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 14:52:50.516990 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 14:52:50.517000 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 14:52:50.517010 kernel: SELinux: policy capability open_perms=1 Jun 25 14:52:50.517018 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 14:52:50.517025 kernel: SELinux: policy capability always_check_network=0 Jun 25 14:52:50.517034 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 14:52:50.517042 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 14:52:50.517050 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 14:52:50.517058 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 14:52:50.517069 systemd[1]: Successfully loaded SELinux policy in 261.530ms. Jun 25 14:52:50.517079 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.682ms. Jun 25 14:52:50.517089 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 14:52:50.517098 systemd[1]: Detected virtualization microsoft. Jun 25 14:52:50.517108 systemd[1]: Detected architecture arm64. Jun 25 14:52:50.517117 systemd[1]: Detected first boot. Jun 25 14:52:50.517126 systemd[1]: Hostname set to . 
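The systemd banner above lists every compile-time feature with a leading + or -, plus a trailing default-hierarchy= setting. A small helper, shown here only as a sketch with the banner text copied from the log, splits that string into enabled and disabled sets:

    # Feature string copied from the "systemd 252 running in system mode" line above.
    BANNER = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
              "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
              "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
              "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT "
              "default-hierarchy=unified")

    def parse_features(banner):
        """Split a systemd feature banner into enabled names, disabled names, and settings."""
        enabled, disabled, settings = set(), set(), {}
        for tok in banner.split():
            if tok.startswith("+"):
                enabled.add(tok[1:])
            elif tok.startswith("-"):
                disabled.add(tok[1:])
            elif "=" in tok:
                key, val = tok.split("=", 1)
                settings[key] = val
        return enabled, disabled, settings

    if __name__ == "__main__":
        enabled, disabled, settings = parse_features(BANNER)
        print(f"{len(enabled)} enabled, {len(disabled)} disabled, settings={settings}")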
Jun 25 14:52:50.517135 systemd[1]: Initializing machine ID from random generator. Jun 25 14:52:50.517144 systemd[1]: Populated /etc with preset unit settings. Jun 25 14:52:50.517154 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 14:52:50.517163 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 14:52:50.517171 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 14:52:50.517193 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 14:52:50.517204 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 14:52:50.517213 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 14:52:50.517222 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 14:52:50.517232 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 14:52:50.517242 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 14:52:50.517251 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 14:52:50.517262 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 14:52:50.517271 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 14:52:50.517280 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 14:52:50.517289 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 14:52:50.517299 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 14:52:50.517308 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:52:50.517317 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 14:52:50.517326 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 14:52:50.517335 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 14:52:50.517346 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 14:52:50.517355 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 14:52:50.517368 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 14:52:50.517377 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 14:52:50.517386 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:52:50.517396 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 14:52:50.517405 systemd[1]: Reached target slices.target - Slice Units. Jun 25 14:52:50.517415 systemd[1]: Reached target swap.target - Swaps. Jun 25 14:52:50.517424 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 14:52:50.517434 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 14:52:50.517443 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 14:52:50.517453 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:52:50.517462 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 14:52:50.517473 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:52:50.517482 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jun 25 14:52:50.517492 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 14:52:50.517501 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 14:52:50.517510 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 14:52:50.517519 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 14:52:50.517529 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 14:52:50.517539 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 14:52:50.517549 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 14:52:50.517558 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:52:50.517569 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 14:52:50.517578 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 14:52:50.517588 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:52:50.517597 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:52:50.517606 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:52:50.517616 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 14:52:50.517626 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:52:50.517636 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 14:52:50.517646 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 14:52:50.517655 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 14:52:50.517664 kernel: kauditd_printk_skb: 47 callbacks suppressed Jun 25 14:52:50.517674 kernel: audit: type=1131 audit(1719327170.323:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.517682 kernel: fuse: init (API version 7.37) Jun 25 14:52:50.517692 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 14:52:50.517702 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 14:52:50.517711 kernel: audit: type=1131 audit(1719327170.359:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.517721 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 14:52:50.517731 kernel: audit: type=1130 audit(1719327170.385:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.517739 systemd[1]: systemd-journald.service: Consumed 3.153s CPU time. Jun 25 14:52:50.517748 kernel: loop: module loaded Jun 25 14:52:50.517758 kernel: audit: type=1131 audit(1719327170.385:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:52:50.517768 kernel: audit: type=1334 audit(1719327170.407:109): prog-id=18 op=LOAD Jun 25 14:52:50.517777 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 14:52:50.517786 kernel: audit: type=1334 audit(1719327170.409:110): prog-id=19 op=LOAD Jun 25 14:52:50.517795 kernel: audit: type=1334 audit(1719327170.409:111): prog-id=20 op=LOAD Jun 25 14:52:50.517803 kernel: audit: type=1334 audit(1719327170.409:112): prog-id=16 op=UNLOAD Jun 25 14:52:50.517811 kernel: audit: type=1334 audit(1719327170.409:113): prog-id=17 op=UNLOAD Jun 25 14:52:50.517820 kernel: ACPI: bus type drm_connector registered Jun 25 14:52:50.517829 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 14:52:50.517840 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 14:52:50.517849 kernel: audit: type=1305 audit(1719327170.510:114): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 14:52:50.517861 systemd-journald[1217]: Journal started Jun 25 14:52:50.517897 systemd-journald[1217]: Runtime Journal (/run/log/journal/f264c8e5276d4ae1983b8f11fcc84be8) is 8.0M, max 78.6M, 70.6M free. Jun 25 14:52:47.450000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 14:52:47.875000 audit: BPF prog-id=10 op=LOAD Jun 25 14:52:47.875000 audit: BPF prog-id=10 op=UNLOAD Jun 25 14:52:47.875000 audit: BPF prog-id=11 op=LOAD Jun 25 14:52:47.875000 audit: BPF prog-id=11 op=UNLOAD Jun 25 14:52:49.683000 audit: BPF prog-id=12 op=LOAD Jun 25 14:52:49.683000 audit: BPF prog-id=3 op=UNLOAD Jun 25 14:52:49.683000 audit: BPF prog-id=13 op=LOAD Jun 25 14:52:49.683000 audit: BPF prog-id=14 op=LOAD Jun 25 14:52:49.683000 audit: BPF prog-id=4 op=UNLOAD Jun 25 14:52:49.683000 audit: BPF prog-id=5 op=UNLOAD Jun 25 14:52:49.684000 audit: BPF prog-id=15 op=LOAD Jun 25 14:52:49.684000 audit: BPF prog-id=12 op=UNLOAD Jun 25 14:52:49.684000 audit: BPF prog-id=16 op=LOAD Jun 25 14:52:49.684000 audit: BPF prog-id=17 op=LOAD Jun 25 14:52:49.684000 audit: BPF prog-id=13 op=UNLOAD Jun 25 14:52:49.684000 audit: BPF prog-id=14 op=UNLOAD Jun 25 14:52:49.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:49.697000 audit: BPF prog-id=15 op=UNLOAD Jun 25 14:52:49.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:49.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:49.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:49.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
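journald reports its runtime journal above with short size suffixes ("is 8.0M, max 78.6M, 70.6M free"); the persistent system journal later reports 8.0M used against a 2.6G cap. As a hedged sketch (assuming the suffixes are 1024-based, which the log does not state), those figures can be turned back into bytes:

    import re

    # Example report copied from the log above.
    LINE = ("Runtime Journal (/run/log/journal/f264c8e5276d4ae1983b8f11fcc84be8) "
            "is 8.0M, max 78.6M, 70.6M free.")

    UNITS = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}  # assumed 1024-based

    def to_bytes(size):
        """Convert journald's short notation ('8.0M', '2.6G') to a byte count."""
        return int(float(size[:-1]) * UNITS[size[-1]])

    def parse_report(line):
        m = re.search(r"is ([\d.]+[KMG]), max ([\d.]+[KMG]), ([\d.]+[KMG]) free", line)
        return tuple(to_bytes(s) for s in m.groups())

    if __name__ == "__main__":
        used, maximum, free = parse_report(LINE)
        print(f"used={used} max={maximum} free={free} bytes")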
res=success' Jun 25 14:52:50.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.407000 audit: BPF prog-id=18 op=LOAD Jun 25 14:52:50.409000 audit: BPF prog-id=19 op=LOAD Jun 25 14:52:50.409000 audit: BPF prog-id=20 op=LOAD Jun 25 14:52:50.409000 audit: BPF prog-id=16 op=UNLOAD Jun 25 14:52:50.409000 audit: BPF prog-id=17 op=UNLOAD Jun 25 14:52:50.510000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 14:52:49.676129 systemd[1]: Queued start job for default target multi-user.target. Jun 25 14:52:49.676140 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 25 14:52:49.685021 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 14:52:49.685419 systemd[1]: systemd-journald.service: Consumed 3.153s CPU time. Jun 25 14:52:50.510000 audit[1217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd3f34fe0 a2=4000 a3=1 items=0 ppid=1 pid=1217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:50.510000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 14:52:50.548878 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 14:52:50.561497 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 14:52:50.570154 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 14:52:50.570249 systemd[1]: Stopped verity-setup.service. Jun 25 14:52:50.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.582200 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 14:52:50.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.588009 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 14:52:50.593896 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 14:52:50.599891 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 14:52:50.604795 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jun 25 14:52:50.610639 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 14:52:50.616148 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 14:52:50.621046 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 14:52:50.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.627139 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:52:50.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.633021 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 14:52:50.633175 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 14:52:50.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.639335 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:52:50.639484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:52:50.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.645825 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 14:52:50.645977 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:52:50.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.651522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:52:50.651667 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:52:50.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:52:50.657760 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 14:52:50.657907 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 14:52:50.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.663727 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:52:50.663872 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:52:50.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.669682 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 14:52:50.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.675767 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 14:52:50.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.682004 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 14:52:50.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.687870 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:52:50.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.694292 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 14:52:50.703317 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 14:52:50.710314 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 14:52:50.715412 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 14:52:50.717296 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 14:52:50.723971 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jun 25 14:52:50.729171 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:52:50.730475 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 14:52:50.735487 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:52:50.736837 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 14:52:50.751373 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 14:52:50.760276 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 14:52:50.767501 systemd-journald[1217]: Time spent on flushing to /var/log/journal/f264c8e5276d4ae1983b8f11fcc84be8 is 17.569ms for 1041 entries. Jun 25 14:52:50.767501 systemd-journald[1217]: System Journal (/var/log/journal/f264c8e5276d4ae1983b8f11fcc84be8) is 8.0M, max 2.6G, 2.6G free. Jun 25 14:52:50.821132 systemd-journald[1217]: Received client request to flush runtime journal. Jun 25 14:52:50.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:50.775616 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 14:52:50.821472 udevadm[1231]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 14:52:50.782123 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 14:52:50.788999 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 14:52:50.796848 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 14:52:50.815008 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:52:50.822120 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 14:52:50.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:51.021779 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 14:52:51.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:51.789769 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 14:52:51.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
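The flush report above (17.569 ms spent moving 1041 entries to /var/log/journal) gives a feel for the persistent-journal write rate on this VM; the arithmetic below is only a back-of-the-envelope reading of those two numbers:

    # Figures taken from the journal-flush report above.
    flush_ms = 17.569
    entries = 1041

    per_entry_us = flush_ms * 1000 / entries
    rate_per_s = entries / (flush_ms / 1000)

    print(f"~{per_entry_us:.1f} us per entry")        # ~16.9 us
    print(f"~{rate_per_s:,.0f} entries per second")   # roughly 59,000/s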
res=success' Jun 25 14:52:51.796000 audit: BPF prog-id=21 op=LOAD Jun 25 14:52:51.796000 audit: BPF prog-id=22 op=LOAD Jun 25 14:52:51.796000 audit: BPF prog-id=7 op=UNLOAD Jun 25 14:52:51.796000 audit: BPF prog-id=8 op=UNLOAD Jun 25 14:52:51.799421 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:52:51.832834 systemd-udevd[1234]: Using default interface naming scheme 'v252'. Jun 25 14:52:52.039216 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:52:52.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:52.054000 audit: BPF prog-id=23 op=LOAD Jun 25 14:52:52.059356 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 14:52:52.100000 audit: BPF prog-id=24 op=LOAD Jun 25 14:52:52.102208 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1235) Jun 25 14:52:52.104000 audit: BPF prog-id=25 op=LOAD Jun 25 14:52:52.104000 audit: BPF prog-id=26 op=LOAD Jun 25 14:52:52.113552 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 14:52:52.123126 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jun 25 14:52:52.157144 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 14:52:52.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:52.205208 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 14:52:52.205281 kernel: hv_vmbus: registering driver hv_balloon Jun 25 14:52:52.214037 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 25 14:52:52.214121 kernel: hv_balloon: Memory hot add disabled on ARM64 Jun 25 14:52:52.218647 kernel: hv_vmbus: registering driver hyperv_fb Jun 25 14:52:52.237981 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 25 14:52:52.238070 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 25 14:52:52.243209 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 14:52:52.247288 kernel: Console: switching to colour dummy device 80x25 Jun 25 14:52:52.247336 kernel: hv_vmbus: registering driver hv_utils Jun 25 14:52:52.253798 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 14:52:52.264451 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 14:52:52.267967 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 14:52:52.268013 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 14:52:52.117481 systemd-networkd[1255]: lo: Link UP Jun 25 14:52:52.239000 systemd-journald[1217]: Time jumped backwards, rotating. Jun 25 14:52:52.239073 kernel: mlx5_core bec5:00:02.0 enP48837s1: Link up Jun 25 14:52:52.239244 kernel: hv_netvsc 002248ba-d811-0022-48ba-d811002248ba eth0: Data path switched to VF: enP48837s1 Jun 25 14:52:52.239349 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1249) Jun 25 14:52:52.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:52:52.117494 systemd-networkd[1255]: lo: Gained carrier Jun 25 14:52:52.117914 systemd-networkd[1255]: Enumeration completed Jun 25 14:52:52.122218 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 14:52:52.127205 systemd-networkd[1255]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:52:52.127208 systemd-networkd[1255]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:52:52.131963 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 14:52:52.201971 systemd-networkd[1255]: enP48837s1: Link UP Jun 25 14:52:52.202061 systemd-networkd[1255]: eth0: Link UP Jun 25 14:52:52.202064 systemd-networkd[1255]: eth0: Gained carrier Jun 25 14:52:52.202077 systemd-networkd[1255]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:52:52.207911 systemd-networkd[1255]: enP48837s1: Gained carrier Jun 25 14:52:52.213915 systemd-networkd[1255]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 14:52:52.263947 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 14:52:52.270017 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 14:52:52.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:52.286008 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 14:52:52.426337 lvm[1317]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:52:52.448670 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 14:52:52.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:52.454643 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:52:52.463029 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 14:52:52.466779 lvm[1318]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:52:52.489765 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 14:52:52.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:52.496655 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:52:52.502751 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 14:52:52.502794 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 14:52:52.508834 systemd[1]: Reached target machines.target - Containers. Jun 25 14:52:52.521952 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
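The records above show systemd-networkd matching eth0 against /usr/lib/systemd/network/zz-default.network and acquiring a DHCPv4 lease; the "potentially unpredictable interface name" wording is networkd's standard warning when a unit matches interfaces by name rather than by a stable attribute such as the MAC address. For reference, a catch-all unit of this kind is usually little more than a wildcard match with DHCP enabled; the sketch below is illustrative only and not the literal contents of the file shipped on this image:

  # illustrative sketch of a catch-all .network unit (hypothetical contents)
  [Match]
  Name=*

  [Network]
  DHCP=yes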
Jun 25 14:52:52.527524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:52:52.527598 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:52:52.528999 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 14:52:52.535715 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 14:52:52.542791 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 14:52:52.550011 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 14:52:52.579979 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1320 (bootctl) Jun 25 14:52:52.583983 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 14:52:52.593239 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 14:52:52.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:52.605809 kernel: loop0: detected capacity change from 0 to 55744 Jun 25 14:52:53.282766 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 14:52:53.284136 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 14:52:53.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:53.312690 systemd-fsck[1327]: fsck.fat 4.2 (2021-01-31) Jun 25 14:52:53.312690 systemd-fsck[1327]: /dev/sda1: 242 files, 114659/258078 clusters Jun 25 14:52:53.315742 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 14:52:53.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:53.329924 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 14:52:53.338515 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 14:52:53.349140 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 14:52:53.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:52:53.528804 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 14:52:53.575807 kernel: loop1: detected capacity change from 0 to 193208 Jun 25 14:52:53.608806 kernel: loop2: detected capacity change from 0 to 59648 Jun 25 14:52:53.636931 systemd-networkd[1255]: eth0: Gained IPv6LL Jun 25 14:52:53.642736 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 14:52:53.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:53.933807 kernel: loop3: detected capacity change from 0 to 113264 Jun 25 14:52:54.406821 kernel: loop4: detected capacity change from 0 to 55744 Jun 25 14:52:54.414802 kernel: loop5: detected capacity change from 0 to 193208 Jun 25 14:52:54.423800 kernel: loop6: detected capacity change from 0 to 59648 Jun 25 14:52:54.432806 kernel: loop7: detected capacity change from 0 to 113264 Jun 25 14:52:54.435936 (sd-sysext)[1337]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 25 14:52:54.437535 (sd-sysext)[1337]: Merged extensions into '/usr'. Jun 25 14:52:54.438958 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 14:52:54.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.452022 systemd[1]: Starting ensure-sysext.service... Jun 25 14:52:54.457120 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 14:52:54.486528 systemd[1]: Reloading. Jun 25 14:52:54.503594 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 14:52:54.505924 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 14:52:54.520234 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 14:52:54.549229 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 14:52:54.635440 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
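Earlier in this block (sd-sysext) reports merging the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-azure' extension images into /usr. systemd-sysext only merges an image that carries an extension-release file whose identification fields match the running OS; a minimal sketch, using a hypothetical extension named "kubernetes" and illustrative values, looks like this:

  # /usr/lib/extension-release.d/extension-release.kubernetes (inside the extension image)
  # ID must match the host's os-release ID (or be _any);
  # SYSEXT_LEVEL or VERSION_ID must also match for the merge to proceed.
  ID=flatcar
  SYSEXT_LEVEL=1.0

Merged images can be inspected afterwards with 'systemd-sysext status'.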
Jun 25 14:52:54.705000 audit: BPF prog-id=27 op=LOAD Jun 25 14:52:54.706000 audit: BPF prog-id=24 op=UNLOAD Jun 25 14:52:54.706000 audit: BPF prog-id=28 op=LOAD Jun 25 14:52:54.706000 audit: BPF prog-id=29 op=LOAD Jun 25 14:52:54.706000 audit: BPF prog-id=25 op=UNLOAD Jun 25 14:52:54.706000 audit: BPF prog-id=26 op=UNLOAD Jun 25 14:52:54.707000 audit: BPF prog-id=30 op=LOAD Jun 25 14:52:54.707000 audit: BPF prog-id=31 op=LOAD Jun 25 14:52:54.707000 audit: BPF prog-id=21 op=UNLOAD Jun 25 14:52:54.707000 audit: BPF prog-id=22 op=UNLOAD Jun 25 14:52:54.709000 audit: BPF prog-id=32 op=LOAD Jun 25 14:52:54.709000 audit: BPF prog-id=18 op=UNLOAD Jun 25 14:52:54.709000 audit: BPF prog-id=33 op=LOAD Jun 25 14:52:54.709000 audit: BPF prog-id=34 op=LOAD Jun 25 14:52:54.709000 audit: BPF prog-id=19 op=UNLOAD Jun 25 14:52:54.709000 audit: BPF prog-id=20 op=UNLOAD Jun 25 14:52:54.710000 audit: BPF prog-id=35 op=LOAD Jun 25 14:52:54.710000 audit: BPF prog-id=23 op=UNLOAD Jun 25 14:52:54.713747 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:52:54.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.726836 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 14:52:54.737241 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 14:52:54.744054 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 14:52:54.754000 audit: BPF prog-id=36 op=LOAD Jun 25 14:52:54.763024 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 14:52:54.768000 audit: BPF prog-id=37 op=LOAD Jun 25 14:52:54.773951 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 14:52:54.780393 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 14:52:54.792431 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:52:54.797443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:52:54.796000 audit[1426]: SYSTEM_BOOT pid=1426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.806453 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:52:54.814149 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:52:54.819549 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:52:54.819677 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:52:54.820484 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:52:54.820616 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jun 25 14:52:54.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.826702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:52:54.827008 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:52:54.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.834083 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:52:54.834230 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:52:54.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.844349 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:52:54.850411 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:52:54.860360 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:52:54.868920 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:52:54.876491 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:52:54.876633 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:52:54.877649 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 14:52:54.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.884210 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:52:54.884339 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:52:54.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:52:54.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.891110 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:52:54.891266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:52:54.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.897641 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:52:54.898027 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:52:54.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.907503 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:52:54.913100 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:52:54.921639 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:52:54.936121 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:52:54.943704 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:52:54.949103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:52:54.949237 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:52:54.950058 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 14:52:54.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.961803 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 14:52:54.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.968090 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:52:54.968219 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:52:54.969308 systemd-resolved[1419]: Positive Trust Anchors: Jun 25 14:52:54.969318 systemd-resolved[1419]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 14:52:54.969344 systemd-resolved[1419]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 14:52:54.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:54.975214 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 14:52:54.975354 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:52:54.980000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 14:52:54.980000 audit[1444]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffca2d4fd0 a2=420 a3=0 items=0 ppid=1415 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:54.980000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 14:52:54.982208 augenrules[1444]: No rules Jun 25 14:52:54.982133 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:52:54.982266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:52:54.989170 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 14:52:54.995614 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:52:54.995755 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:52:55.002200 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 14:52:55.007394 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:52:55.007455 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:52:55.007756 systemd[1]: Finished ensure-sysext.service. Jun 25 14:52:55.022372 systemd-resolved[1419]: Using system hostname 'ci-3815.2.4-a-f605b45a38'. Jun 25 14:52:55.024126 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 14:52:55.030294 systemd[1]: Reached target network.target - Network. Jun 25 14:52:55.035283 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 14:52:55.040964 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:52:55.065045 systemd-timesyncd[1423]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Jun 25 14:52:55.066294 systemd-timesyncd[1423]: Initial clock synchronization to Tue 2024-06-25 14:52:55.065964 UTC. 
Jun 25 14:52:55.355809 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 14:52:55.363293 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 14:52:58.176585 ldconfig[1319]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 14:52:58.198220 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 14:52:58.208174 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 14:52:58.220459 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 14:52:58.226681 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:52:58.232122 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 14:52:58.238011 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 14:52:58.244007 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 14:52:58.249616 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 14:52:58.255140 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 14:52:58.260622 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 14:52:58.260653 systemd[1]: Reached target paths.target - Path Units. Jun 25 14:52:58.266211 systemd[1]: Reached target timers.target - Timer Units. Jun 25 14:52:58.272186 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 14:52:58.279236 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 14:52:58.291768 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 14:52:58.297577 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:52:58.298137 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 14:52:58.303776 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 14:52:58.308553 systemd[1]: Reached target basic.target - Basic System. Jun 25 14:52:58.315121 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:52:58.315158 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:52:58.321929 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 14:52:58.329008 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 14:52:58.335531 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 14:52:58.341551 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 14:52:58.348350 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 14:52:58.349672 jq[1459]: false Jun 25 14:52:58.353859 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jun 25 14:52:58.372920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:52:58.380289 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 14:52:58.386637 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 14:52:58.396491 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 14:52:58.412107 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 14:52:58.422279 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 14:52:58.430934 extend-filesystems[1460]: Found loop4 Jun 25 14:52:58.430934 extend-filesystems[1460]: Found loop5 Jun 25 14:52:58.430934 extend-filesystems[1460]: Found loop6 Jun 25 14:52:58.430934 extend-filesystems[1460]: Found loop7 Jun 25 14:52:58.430934 extend-filesystems[1460]: Found sda Jun 25 14:52:58.430934 extend-filesystems[1460]: Found sda1 Jun 25 14:52:58.430934 extend-filesystems[1460]: Found sda2 Jun 25 14:52:58.430934 extend-filesystems[1460]: Found sda3 Jun 25 14:52:58.430934 extend-filesystems[1460]: Found usr Jun 25 14:52:58.430934 extend-filesystems[1460]: Found sda4 Jun 25 14:52:58.430934 extend-filesystems[1460]: Found sda6 Jun 25 14:52:58.430934 extend-filesystems[1460]: Found sda7 Jun 25 14:52:58.430934 extend-filesystems[1460]: Found sda9 Jun 25 14:52:58.430934 extend-filesystems[1460]: Checking size of /dev/sda9 Jun 25 14:52:58.708946 extend-filesystems[1460]: Old size kept for /dev/sda9 Jun 25 14:52:58.708946 extend-filesystems[1460]: Found sr0 Jun 25 14:52:58.518500 dbus-daemon[1458]: [system] SELinux support is enabled Jun 25 14:52:58.432880 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 14:52:58.560963 dbus-daemon[1458]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 25 14:52:58.439265 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:52:58.766597 update_engine[1479]: I0625 14:52:58.511137 1479 main.cc:92] Flatcar Update Engine starting Jun 25 14:52:58.766597 update_engine[1479]: I0625 14:52:58.548267 1479 update_check_scheduler.cc:74] Next update check in 7m21s Jun 25 14:52:58.439339 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 14:52:58.767003 jq[1485]: true Jun 25 14:52:58.439834 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 14:52:58.444181 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 14:52:58.462045 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 14:52:58.767594 tar[1489]: linux-arm64/helm Jun 25 14:52:58.469559 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 14:52:58.469821 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 14:52:58.772561 jq[1490]: true Jun 25 14:52:58.475153 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 14:52:58.475374 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 14:52:58.482676 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jun 25 14:52:58.489559 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 14:52:58.490371 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 14:52:58.518699 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 14:52:58.534853 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 14:52:58.535058 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 14:52:58.544434 systemd-logind[1476]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 14:52:58.544907 systemd-logind[1476]: New seat seat0. Jun 25 14:52:58.560443 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 14:52:58.560484 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 14:52:58.570094 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 14:52:58.570116 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 14:52:58.580723 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 14:52:58.590003 systemd[1]: Started update-engine.service - Update Engine. Jun 25 14:52:58.609111 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 14:52:58.789235 coreos-metadata[1455]: Jun 25 14:52:58.781 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 14:52:58.800627 coreos-metadata[1455]: Jun 25 14:52:58.800 INFO Fetch successful Jun 25 14:52:58.800627 coreos-metadata[1455]: Jun 25 14:52:58.800 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 25 14:52:58.802866 bash[1527]: Updated "/home/core/.ssh/authorized_keys" Jun 25 14:52:58.804335 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 14:52:58.812044 coreos-metadata[1455]: Jun 25 14:52:58.810 INFO Fetch successful Jun 25 14:52:58.812044 coreos-metadata[1455]: Jun 25 14:52:58.810 INFO Fetching http://168.63.129.16/machine/16abbab7-8c0b-4d9d-a4ac-f798a9be3bc2/7d95ef51%2D82f0%2D47f5%2D885e%2D7baecd685449.%5Fci%2D3815.2.4%2Da%2Df605b45a38?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 25 14:52:58.811690 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 14:52:58.814854 coreos-metadata[1455]: Jun 25 14:52:58.813 INFO Fetch successful Jun 25 14:52:58.814854 coreos-metadata[1455]: Jun 25 14:52:58.813 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 25 14:52:58.824805 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1501) Jun 25 14:52:58.829736 coreos-metadata[1455]: Jun 25 14:52:58.829 INFO Fetch successful Jun 25 14:52:58.849296 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 14:52:58.857200 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jun 25 14:52:58.966742 locksmithd[1514]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 14:52:59.322626 containerd[1492]: time="2024-06-25T14:52:59.322502679Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 14:52:59.407311 containerd[1492]: time="2024-06-25T14:52:59.407260183Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 14:52:59.413082 containerd[1492]: time="2024-06-25T14:52:59.413047284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:52:59.414606 containerd[1492]: time="2024-06-25T14:52:59.414568784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:52:59.414720 containerd[1492]: time="2024-06-25T14:52:59.414704233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:52:59.415032 containerd[1492]: time="2024-06-25T14:52:59.415006293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:52:59.415117 containerd[1492]: time="2024-06-25T14:52:59.415102019Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 14:52:59.415249 containerd[1492]: time="2024-06-25T14:52:59.415232988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 14:52:59.415373 containerd[1492]: time="2024-06-25T14:52:59.415355916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:52:59.415431 containerd[1492]: time="2024-06-25T14:52:59.415418320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 14:52:59.415541 containerd[1492]: time="2024-06-25T14:52:59.415525967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:52:59.415829 containerd[1492]: time="2024-06-25T14:52:59.415778064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 14:52:59.415912 containerd[1492]: time="2024-06-25T14:52:59.415893952Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 14:52:59.415963 containerd[1492]: time="2024-06-25T14:52:59.415951555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:52:59.416142 containerd[1492]: time="2024-06-25T14:52:59.416122727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:52:59.416209 containerd[1492]: time="2024-06-25T14:52:59.416195811Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 14:52:59.416312 containerd[1492]: time="2024-06-25T14:52:59.416296458Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 14:52:59.416381 containerd[1492]: time="2024-06-25T14:52:59.416365423Z" level=info msg="metadata content store policy set" policy=shared Jun 25 14:52:59.443815 containerd[1492]: time="2024-06-25T14:52:59.441806659Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 14:52:59.443815 containerd[1492]: time="2024-06-25T14:52:59.441853982Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 14:52:59.443815 containerd[1492]: time="2024-06-25T14:52:59.441867663Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 14:52:59.443815 containerd[1492]: time="2024-06-25T14:52:59.441911946Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 14:52:59.443815 containerd[1492]: time="2024-06-25T14:52:59.441926707Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 14:52:59.443815 containerd[1492]: time="2024-06-25T14:52:59.441937187Z" level=info msg="NRI interface is disabled by configuration." Jun 25 14:52:59.443815 containerd[1492]: time="2024-06-25T14:52:59.441949388Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 14:52:59.443815 containerd[1492]: time="2024-06-25T14:52:59.442157842Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 14:52:59.443815 containerd[1492]: time="2024-06-25T14:52:59.442179603Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 14:52:59.443815 containerd[1492]: time="2024-06-25T14:52:59.442193204Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 14:52:59.443815 containerd[1492]: time="2024-06-25T14:52:59.442206725Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 14:52:59.443815 containerd[1492]: time="2024-06-25T14:52:59.442222326Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 14:52:59.443815 containerd[1492]: time="2024-06-25T14:52:59.442239487Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 14:52:59.443815 containerd[1492]: time="2024-06-25T14:52:59.442252088Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 14:52:59.444158 containerd[1492]: time="2024-06-25T14:52:59.442266369Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 14:52:59.444158 containerd[1492]: time="2024-06-25T14:52:59.442281810Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jun 25 14:52:59.444158 containerd[1492]: time="2024-06-25T14:52:59.442294811Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 14:52:59.444158 containerd[1492]: time="2024-06-25T14:52:59.442306812Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 14:52:59.444158 containerd[1492]: time="2024-06-25T14:52:59.442320012Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 14:52:59.444158 containerd[1492]: time="2024-06-25T14:52:59.442406698Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 14:52:59.444158 containerd[1492]: time="2024-06-25T14:52:59.442673636Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 14:52:59.444158 containerd[1492]: time="2024-06-25T14:52:59.442700358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444158 containerd[1492]: time="2024-06-25T14:52:59.442713478Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 14:52:59.444158 containerd[1492]: time="2024-06-25T14:52:59.442735400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 14:52:59.444158 containerd[1492]: time="2024-06-25T14:52:59.442818925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444158 containerd[1492]: time="2024-06-25T14:52:59.442836366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444158 containerd[1492]: time="2024-06-25T14:52:59.442848287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444158 containerd[1492]: time="2024-06-25T14:52:59.442860168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444393 containerd[1492]: time="2024-06-25T14:52:59.442871489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444393 containerd[1492]: time="2024-06-25T14:52:59.442884170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444393 containerd[1492]: time="2024-06-25T14:52:59.442896210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444393 containerd[1492]: time="2024-06-25T14:52:59.442909051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444393 containerd[1492]: time="2024-06-25T14:52:59.442923612Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 14:52:59.444393 containerd[1492]: time="2024-06-25T14:52:59.443053421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444393 containerd[1492]: time="2024-06-25T14:52:59.443069342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jun 25 14:52:59.444393 containerd[1492]: time="2024-06-25T14:52:59.443082143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444393 containerd[1492]: time="2024-06-25T14:52:59.443094143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444393 containerd[1492]: time="2024-06-25T14:52:59.443106024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444393 containerd[1492]: time="2024-06-25T14:52:59.443123465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444393 containerd[1492]: time="2024-06-25T14:52:59.443135746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444393 containerd[1492]: time="2024-06-25T14:52:59.443146107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 14:52:59.444624 containerd[1492]: time="2024-06-25T14:52:59.443382482Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 14:52:59.444624 containerd[1492]: time="2024-06-25T14:52:59.443436526Z" level=info msg="Connect containerd service" Jun 25 14:52:59.444624 containerd[1492]: time="2024-06-25T14:52:59.443462808Z" level=info msg="using legacy CRI server" Jun 25 14:52:59.444624 containerd[1492]: time="2024-06-25T14:52:59.443469568Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 14:52:59.444624 containerd[1492]: time="2024-06-25T14:52:59.443496490Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 14:52:59.445332 containerd[1492]: time="2024-06-25T14:52:59.445211523Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 14:52:59.446681 containerd[1492]: time="2024-06-25T14:52:59.446653938Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 14:52:59.446846 containerd[1492]: time="2024-06-25T14:52:59.446720062Z" level=info msg="Start subscribing containerd event" Jun 25 14:52:59.446893 containerd[1492]: time="2024-06-25T14:52:59.446861432Z" level=info msg="Start recovering state" Jun 25 14:52:59.446944 containerd[1492]: time="2024-06-25T14:52:59.446807548Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 14:52:59.447010 containerd[1492]: time="2024-06-25T14:52:59.446996441Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 14:52:59.447066 containerd[1492]: time="2024-06-25T14:52:59.447052564Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 14:52:59.447213 containerd[1492]: time="2024-06-25T14:52:59.446931876Z" level=info msg="Start event monitor" Jun 25 14:52:59.447213 containerd[1492]: time="2024-06-25T14:52:59.447193214Z" level=info msg="Start snapshots syncer" Jun 25 14:52:59.447213 containerd[1492]: time="2024-06-25T14:52:59.447202894Z" level=info msg="Start cni network conf syncer for default" Jun 25 14:52:59.447213 containerd[1492]: time="2024-06-25T14:52:59.447210415Z" level=info msg="Start streaming server" Jun 25 14:52:59.448722 containerd[1492]: time="2024-06-25T14:52:59.448696593Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 14:52:59.451832 containerd[1492]: time="2024-06-25T14:52:59.451806397Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 14:52:59.452048 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 14:52:59.458577 containerd[1492]: time="2024-06-25T14:52:59.458550522Z" level=info msg="containerd successfully booted in 0.138391s" Jun 25 14:52:59.568169 tar[1489]: linux-arm64/LICENSE Jun 25 14:52:59.569050 tar[1489]: linux-arm64/README.md Jun 25 14:52:59.578069 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 14:52:59.685740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
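The containerd startup dump above shows the CRI plugin using the runc runtime with SystemdCgroup:true. That option is normally set in /etc/containerd/config.toml; the following is only a sketch of the relevant stanza as it is commonly written for containerd 1.7, not the actual file from this host:

  # illustrative runc runtime stanza in /etc/containerd/config.toml
  version = 2

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      # delegate container cgroups to systemd, matching SystemdCgroup:true in the dump above
      SystemdCgroup = true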
Jun 25 14:53:00.104432 kubelet[1573]: E0625 14:53:00.104355 1573 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:53:00.106678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:53:00.106835 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:53:00.342964 sshd_keygen[1486]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 14:53:00.361500 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 14:53:00.371300 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 14:53:00.377532 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 25 14:53:00.384015 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 14:53:00.384198 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 14:53:00.392746 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 14:53:00.402081 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 25 14:53:00.408382 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 14:53:00.419193 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 14:53:00.426241 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 25 14:53:00.432407 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 14:53:00.437346 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 14:53:00.449088 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 14:53:00.459838 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 14:53:00.460035 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 14:53:00.467112 systemd[1]: Startup finished in 626ms (kernel) + 11.461s (initrd) + 13.460s (userspace) = 25.549s. Jun 25 14:53:00.672308 login[1598]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Jun 25 14:53:00.672935 login[1597]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 14:53:00.680289 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 14:53:00.687139 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 14:53:00.690281 systemd-logind[1476]: New session 2 of user core. Jun 25 14:53:00.697559 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 14:53:00.702262 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 14:53:00.719554 (systemd)[1601]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:53:00.811849 systemd[1601]: Queued start job for default target default.target. Jun 25 14:53:00.816168 systemd[1601]: Reached target paths.target - Paths. Jun 25 14:53:00.816189 systemd[1601]: Reached target sockets.target - Sockets. Jun 25 14:53:00.816199 systemd[1601]: Reached target timers.target - Timers. Jun 25 14:53:00.816208 systemd[1601]: Reached target basic.target - Basic System. Jun 25 14:53:00.816253 systemd[1601]: Reached target default.target - Main User Target. 
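The kubelet exit recorded above simply means /var/lib/kubelet/config.yaml does not exist yet; that file is typically generated during node bootstrap (for example by kubeadm), after which the unit can start cleanly. A minimal KubeletConfiguration of the kind that lives at that path looks roughly like the sketch below; the field names are from the kubelet.config.k8s.io/v1beta1 API, but the values are illustrative only:

  # illustrative /var/lib/kubelet/config.yaml (normally written by the bootstrap tooling)
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # use the systemd cgroup driver, consistent with SystemdCgroup = true on the runtime side
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  clusterDomain: cluster.local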
Jun 25 14:53:00.816283 systemd[1601]: Startup finished in 90ms. Jun 25 14:53:00.816339 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 14:53:00.817538 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 14:53:01.673766 login[1598]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 14:53:01.678568 systemd-logind[1476]: New session 1 of user core. Jun 25 14:53:01.680936 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 14:53:01.884594 waagent[1596]: 2024-06-25T14:53:01.884495Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jun 25 14:53:01.889868 waagent[1596]: 2024-06-25T14:53:01.889806Z INFO Daemon Daemon OS: flatcar 3815.2.4 Jun 25 14:53:01.893983 waagent[1596]: 2024-06-25T14:53:01.893934Z INFO Daemon Daemon Python: 3.11.6 Jun 25 14:53:01.899014 waagent[1596]: 2024-06-25T14:53:01.898907Z INFO Daemon Daemon Run daemon Jun 25 14:53:01.904198 waagent[1596]: 2024-06-25T14:53:01.904146Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3815.2.4' Jun 25 14:53:01.913677 waagent[1596]: 2024-06-25T14:53:01.913603Z INFO Daemon Daemon Using waagent for provisioning Jun 25 14:53:01.918655 waagent[1596]: 2024-06-25T14:53:01.918609Z INFO Daemon Daemon Activate resource disk Jun 25 14:53:01.923078 waagent[1596]: 2024-06-25T14:53:01.923031Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 25 14:53:01.933690 waagent[1596]: 2024-06-25T14:53:01.933607Z INFO Daemon Daemon Found device: None Jun 25 14:53:01.937715 waagent[1596]: 2024-06-25T14:53:01.937668Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 25 14:53:01.945415 waagent[1596]: 2024-06-25T14:53:01.945369Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 25 14:53:01.955950 waagent[1596]: 2024-06-25T14:53:01.955902Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 14:53:01.961403 waagent[1596]: 2024-06-25T14:53:01.961357Z INFO Daemon Daemon Running default provisioning handler Jun 25 14:53:01.973310 waagent[1596]: 2024-06-25T14:53:01.973243Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Jun 25 14:53:01.986114 waagent[1596]: 2024-06-25T14:53:01.986048Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 25 14:53:01.995158 waagent[1596]: 2024-06-25T14:53:01.995101Z INFO Daemon Daemon cloud-init is enabled: False Jun 25 14:53:01.999810 waagent[1596]: 2024-06-25T14:53:01.999751Z INFO Daemon Daemon Copying ovf-env.xml Jun 25 14:53:02.071765 waagent[1596]: 2024-06-25T14:53:02.071668Z INFO Daemon Daemon Successfully mounted dvd Jun 25 14:53:02.103871 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 25 14:53:02.120594 waagent[1596]: 2024-06-25T14:53:02.120501Z INFO Daemon Daemon Detect protocol endpoint Jun 25 14:53:02.125392 waagent[1596]: 2024-06-25T14:53:02.125333Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 14:53:02.130996 waagent[1596]: 2024-06-25T14:53:02.130945Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jun 25 14:53:02.137448 waagent[1596]: 2024-06-25T14:53:02.137403Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 25 14:53:02.142550 waagent[1596]: 2024-06-25T14:53:02.142503Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 25 14:53:02.147702 waagent[1596]: 2024-06-25T14:53:02.147655Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 25 14:53:02.163439 waagent[1596]: 2024-06-25T14:53:02.163379Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 25 14:53:02.170030 waagent[1596]: 2024-06-25T14:53:02.169998Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 25 14:53:02.175093 waagent[1596]: 2024-06-25T14:53:02.175046Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 25 14:53:02.468161 waagent[1596]: 2024-06-25T14:53:02.468063Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 25 14:53:02.474542 waagent[1596]: 2024-06-25T14:53:02.474467Z INFO Daemon Daemon Forcing an update of the goal state. Jun 25 14:53:02.483684 waagent[1596]: 2024-06-25T14:53:02.483632Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 14:53:02.503112 waagent[1596]: 2024-06-25T14:53:02.503066Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Jun 25 14:53:02.508598 waagent[1596]: 2024-06-25T14:53:02.508547Z INFO Daemon Jun 25 14:53:02.511214 waagent[1596]: 2024-06-25T14:53:02.511168Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: d923c564-e9ae-4fcf-a790-ada628979ad5 eTag: 2303406769642598293 source: Fabric] Jun 25 14:53:02.524111 waagent[1596]: 2024-06-25T14:53:02.524057Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 25 14:53:02.530600 waagent[1596]: 2024-06-25T14:53:02.530550Z INFO Daemon Jun 25 14:53:02.533272 waagent[1596]: 2024-06-25T14:53:02.533229Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 25 14:53:02.543641 waagent[1596]: 2024-06-25T14:53:02.543600Z INFO Daemon Daemon Downloading artifacts profile blob Jun 25 14:53:02.630934 waagent[1596]: 2024-06-25T14:53:02.630846Z INFO Daemon Downloaded certificate {'thumbprint': '67D25E3EBCF2BFCF695DC7C169367AFE33570229', 'hasPrivateKey': True} Jun 25 14:53:02.645251 waagent[1596]: 2024-06-25T14:53:02.645153Z INFO Daemon Downloaded certificate {'thumbprint': '6759D13DCFE7011558AF17B95098226A3FFC1E2C', 'hasPrivateKey': False} Jun 25 14:53:02.662304 waagent[1596]: 2024-06-25T14:53:02.662185Z INFO Daemon Fetch goal state completed Jun 25 14:53:02.679362 waagent[1596]: 2024-06-25T14:53:02.679237Z INFO Daemon Daemon Starting provisioning Jun 25 14:53:02.687223 waagent[1596]: 2024-06-25T14:53:02.687052Z INFO Daemon Daemon Handle ovf-env.xml. Jun 25 14:53:02.694824 waagent[1596]: 2024-06-25T14:53:02.694699Z INFO Daemon Daemon Set hostname [ci-3815.2.4-a-f605b45a38] Jun 25 14:53:02.738114 waagent[1596]: 2024-06-25T14:53:02.737941Z INFO Daemon Daemon Publish hostname [ci-3815.2.4-a-f605b45a38] Jun 25 14:53:02.748987 waagent[1596]: 2024-06-25T14:53:02.748921Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 25 14:53:02.759208 waagent[1596]: 2024-06-25T14:53:02.759048Z INFO Daemon Daemon Primary interface is [eth0] Jun 25 14:53:02.818764 systemd-networkd[1255]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:53:02.818814 systemd-networkd[1255]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
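[Annotation] The wire-protocol detection above boils down to reaching the Azure wire server at 168.63.129.16 and picking the interface that owns the default route in /proc/net/route. Roughly equivalent manual checks (illustrative; the agent's internals may differ):

    ip route get 168.63.129.16                             # "Test for route to 168.63.129.16"
    curl -s 'http://168.63.129.16/?comp=versions'          # wire server protocol versions (2012-11-30 / 2015-04-05 above)
    awk '$2 == "00000000" { print $1 }' /proc/net/route    # primary interface = owner of the default route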
Jun 25 14:53:02.818852 systemd-networkd[1255]: eth0: DHCP lease lost Jun 25 14:53:02.820946 waagent[1596]: 2024-06-25T14:53:02.820642Z INFO Daemon Daemon Create user account if not exists Jun 25 14:53:02.833927 waagent[1596]: 2024-06-25T14:53:02.833662Z INFO Daemon Daemon User core already exists, skip useradd Jun 25 14:53:02.842713 systemd-networkd[1255]: eth0: DHCPv6 lease lost Jun 25 14:53:02.843832 waagent[1596]: 2024-06-25T14:53:02.843637Z INFO Daemon Daemon Configure sudoer Jun 25 14:53:02.851667 waagent[1596]: 2024-06-25T14:53:02.851572Z INFO Daemon Daemon Configure sshd Jun 25 14:53:02.859520 waagent[1596]: 2024-06-25T14:53:02.859350Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 25 14:53:02.878687 waagent[1596]: 2024-06-25T14:53:02.878534Z INFO Daemon Daemon Deploy ssh public key. Jun 25 14:53:02.887938 systemd-networkd[1255]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 14:53:04.162040 waagent[1596]: 2024-06-25T14:53:04.161985Z INFO Daemon Daemon Provisioning complete Jun 25 14:53:04.184739 waagent[1596]: 2024-06-25T14:53:04.184683Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 25 14:53:04.191430 waagent[1596]: 2024-06-25T14:53:04.191358Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 25 14:53:04.201273 waagent[1596]: 2024-06-25T14:53:04.201201Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jun 25 14:53:04.910966 waagent[1648]: 2024-06-25T14:53:04.636287Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jun 25 14:53:04.910966 waagent[1648]: 2024-06-25T14:53:04.910407Z INFO ExtHandler ExtHandler OS: flatcar 3815.2.4 Jun 25 14:53:04.910966 waagent[1648]: 2024-06-25T14:53:04.910552Z INFO ExtHandler ExtHandler Python: 3.11.6 Jun 25 14:53:05.035172 waagent[1648]: 2024-06-25T14:53:05.035087Z INFO ExtHandler ExtHandler Distro: flatcar-3815.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.6; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jun 25 14:53:05.035507 waagent[1648]: 2024-06-25T14:53:05.035467Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 14:53:05.035673 waagent[1648]: 2024-06-25T14:53:05.035636Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 14:53:05.043022 waagent[1648]: 2024-06-25T14:53:05.042965Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 14:53:05.049706 waagent[1648]: 2024-06-25T14:53:05.049666Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jun 25 14:53:05.050273 waagent[1648]: 2024-06-25T14:53:05.050231Z INFO ExtHandler Jun 25 14:53:05.050430 waagent[1648]: 2024-06-25T14:53:05.050396Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: cb0beb97-2af4-46c8-b3fc-ecf1619fd321 eTag: 2303406769642598293 source: Fabric] Jun 25 14:53:05.050829 waagent[1648]: 2024-06-25T14:53:05.050769Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
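[Annotation] Per the "Configure sshd" step above, the agent drops a snippet that disables password-based logins and turns on keep-alive probing, so access relies on the deployed public key. A rough sketch of such a drop-in written from a shell (path and values are illustrative, not copied from waagent):

    cat <<'EOF' >/etc/ssh/sshd_config.d/99-waagent.conf
    PasswordAuthentication no
    ClientAliveInterval 180
    EOF
    sshd -t && systemctl reload sshd    # validate the config, then reload the daemon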
Jun 25 14:53:05.051522 waagent[1648]: 2024-06-25T14:53:05.051476Z INFO ExtHandler Jun 25 14:53:05.051694 waagent[1648]: 2024-06-25T14:53:05.051652Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 25 14:53:05.055324 waagent[1648]: 2024-06-25T14:53:05.055289Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 25 14:53:05.138155 waagent[1648]: 2024-06-25T14:53:05.138076Z INFO ExtHandler Downloaded certificate {'thumbprint': '67D25E3EBCF2BFCF695DC7C169367AFE33570229', 'hasPrivateKey': True} Jun 25 14:53:05.138711 waagent[1648]: 2024-06-25T14:53:05.138669Z INFO ExtHandler Downloaded certificate {'thumbprint': '6759D13DCFE7011558AF17B95098226A3FFC1E2C', 'hasPrivateKey': False} Jun 25 14:53:05.139331 waagent[1648]: 2024-06-25T14:53:05.139243Z INFO ExtHandler Fetch goal state completed Jun 25 14:53:05.156707 waagent[1648]: 2024-06-25T14:53:05.156651Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1648 Jun 25 14:53:05.157175 waagent[1648]: 2024-06-25T14:53:05.157133Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 25 14:53:05.158917 waagent[1648]: 2024-06-25T14:53:05.158876Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3815.2.4', '', 'Flatcar Container Linux by Kinvolk'] Jun 25 14:53:05.159421 waagent[1648]: 2024-06-25T14:53:05.159379Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 25 14:53:05.221856 waagent[1648]: 2024-06-25T14:53:05.221762Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 25 14:53:05.222166 waagent[1648]: 2024-06-25T14:53:05.222127Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 25 14:53:05.228487 waagent[1648]: 2024-06-25T14:53:05.228458Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 25 14:53:05.235346 systemd[1]: Reloading. Jun 25 14:53:05.385753 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:53:05.469297 waagent[1648]: 2024-06-25T14:53:05.468943Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jun 25 14:53:05.474341 systemd[1]: Reloading. Jun 25 14:53:05.631448 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:53:05.714141 waagent[1648]: 2024-06-25T14:53:05.712414Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 25 14:53:05.714141 waagent[1648]: 2024-06-25T14:53:05.712585Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 25 14:53:05.978830 waagent[1648]: 2024-06-25T14:53:05.978734Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 25 14:53:05.979723 waagent[1648]: 2024-06-25T14:53:05.979674Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jun 25 14:53:05.980701 waagent[1648]: 2024-06-25T14:53:05.980645Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 25 14:53:05.980865 waagent[1648]: 2024-06-25T14:53:05.980793Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 14:53:05.981263 waagent[1648]: 2024-06-25T14:53:05.981217Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 14:53:05.981487 waagent[1648]: 2024-06-25T14:53:05.981443Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 25 14:53:05.981669 waagent[1648]: 2024-06-25T14:53:05.981629Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 25 14:53:05.981669 waagent[1648]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 25 14:53:05.981669 waagent[1648]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jun 25 14:53:05.981669 waagent[1648]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 25 14:53:05.981669 waagent[1648]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 25 14:53:05.981669 waagent[1648]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 14:53:05.981669 waagent[1648]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 14:53:05.982088 waagent[1648]: 2024-06-25T14:53:05.982049Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 25 14:53:05.982536 waagent[1648]: 2024-06-25T14:53:05.982477Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 25 14:53:05.982683 waagent[1648]: 2024-06-25T14:53:05.982629Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 25 14:53:05.983147 waagent[1648]: 2024-06-25T14:53:05.983094Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 25 14:53:05.983316 waagent[1648]: 2024-06-25T14:53:05.983259Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jun 25 14:53:05.983406 waagent[1648]: 2024-06-25T14:53:05.983369Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 25 14:53:05.984385 waagent[1648]: 2024-06-25T14:53:05.984336Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 14:53:05.985105 waagent[1648]: 2024-06-25T14:53:05.985056Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 14:53:05.985550 waagent[1648]: 2024-06-25T14:53:05.985498Z INFO EnvHandler ExtHandler Configure routes Jun 25 14:53:05.985960 waagent[1648]: 2024-06-25T14:53:05.985912Z INFO EnvHandler ExtHandler Gateway:None Jun 25 14:53:05.986450 waagent[1648]: 2024-06-25T14:53:05.986405Z INFO EnvHandler ExtHandler Routes:None Jun 25 14:53:05.989083 waagent[1648]: 2024-06-25T14:53:05.989040Z INFO ExtHandler ExtHandler Jun 25 14:53:05.989645 waagent[1648]: 2024-06-25T14:53:05.989593Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f6252a9d-018e-430f-bcf4-589a28056c30 correlation 8d98f493-af1a-49a1-b25a-74af06bed7df created: 2024-06-25T14:51:50.767830Z] Jun 25 14:53:05.991153 waagent[1648]: 2024-06-25T14:53:05.991105Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
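[Annotation] The MonitorHandler routing-table dump above comes straight from /proc/net/route, where addresses are little-endian hex. A tiny helper to decode those fields (illustrative):

    decode() { printf '%d.%d.%d.%d\n' 0x${1:6:2} 0x${1:4:2} 0x${1:2:2} 0x${1:0:2}; }
    decode 0114C80A    # 10.200.20.1     (default gateway)
    decode 10813FA8    # 168.63.129.16   (wire server route)
    decode FEA9FEA9    # 169.254.169.254 (instance metadata route)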
Jun 25 14:53:05.993128 waagent[1648]: 2024-06-25T14:53:05.993080Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 4 ms] Jun 25 14:53:06.029512 waagent[1648]: 2024-06-25T14:53:06.029457Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 1B08BB11-341C-47EC-8A50-6BA25A2A2D5E;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jun 25 14:53:06.036995 waagent[1648]: 2024-06-25T14:53:06.036943Z INFO MonitorHandler ExtHandler Network interfaces: Jun 25 14:53:06.036995 waagent[1648]: Executing ['ip', '-a', '-o', 'link']: Jun 25 14:53:06.036995 waagent[1648]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 25 14:53:06.036995 waagent[1648]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:ba:d8:11 brd ff:ff:ff:ff:ff:ff Jun 25 14:53:06.036995 waagent[1648]: 3: enP48837s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:ba:d8:11 brd ff:ff:ff:ff:ff:ff\ altname enP48837p0s2 Jun 25 14:53:06.036995 waagent[1648]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 25 14:53:06.036995 waagent[1648]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 25 14:53:06.036995 waagent[1648]: 2: eth0 inet 10.200.20.34/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 25 14:53:06.036995 waagent[1648]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 25 14:53:06.036995 waagent[1648]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jun 25 14:53:06.036995 waagent[1648]: 2: eth0 inet6 fe80::222:48ff:feba:d811/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 25 14:53:06.083938 waagent[1648]: 2024-06-25T14:53:06.083878Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jun 25 14:53:06.083938 waagent[1648]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:53:06.083938 waagent[1648]: pkts bytes target prot opt in out source destination Jun 25 14:53:06.083938 waagent[1648]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:53:06.083938 waagent[1648]: pkts bytes target prot opt in out source destination Jun 25 14:53:06.083938 waagent[1648]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:53:06.083938 waagent[1648]: pkts bytes target prot opt in out source destination Jun 25 14:53:06.083938 waagent[1648]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 14:53:06.083938 waagent[1648]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 14:53:06.083938 waagent[1648]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 14:53:06.087209 waagent[1648]: 2024-06-25T14:53:06.087165Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 25 14:53:06.087209 waagent[1648]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:53:06.087209 waagent[1648]: pkts bytes target prot opt in out source destination Jun 25 14:53:06.087209 waagent[1648]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:53:06.087209 waagent[1648]: pkts bytes target prot opt in out source destination Jun 25 14:53:06.087209 waagent[1648]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:53:06.087209 waagent[1648]: pkts bytes target prot opt in out source destination Jun 25 14:53:06.087209 waagent[1648]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 14:53:06.087209 waagent[1648]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 14:53:06.087209 waagent[1648]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 14:53:06.087726 waagent[1648]: 2024-06-25T14:53:06.087695Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 25 14:53:10.357441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 14:53:10.357625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:53:10.365114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:53:10.453209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:53:11.017432 kubelet[1847]: E0625 14:53:11.017385 1847 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:53:11.020695 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:53:11.020851 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:53:21.271705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 14:53:21.271923 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:53:21.279084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:53:21.407897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
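[Annotation] The OUTPUT-chain entries in the waagent firewall listing above map, roughly, onto three iptables rules protecting the wire server endpoint (illustrative reconstruction, not copied from the agent):

    iptables -w -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -w -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -w -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP

DNS and root-owned (UID 0) traffic to 168.63.129.16 stays allowed while new connections from other users are dropped, which matches both listings above.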
Jun 25 14:53:21.910380 kubelet[1857]: E0625 14:53:21.910322 1857 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:53:21.912980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:53:21.913113 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:53:27.806695 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 14:53:27.814192 systemd[1]: Started sshd@0-10.200.20.34:22-10.200.16.10:43352.service - OpenSSH per-connection server daemon (10.200.16.10:43352). Jun 25 14:53:28.289159 sshd[1864]: Accepted publickey for core from 10.200.16.10 port 43352 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:53:28.290403 sshd[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:53:28.295108 systemd-logind[1476]: New session 3 of user core. Jun 25 14:53:28.304027 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 14:53:28.686625 systemd[1]: Started sshd@1-10.200.20.34:22-10.200.16.10:43366.service - OpenSSH per-connection server daemon (10.200.16.10:43366). Jun 25 14:53:29.126654 sshd[1869]: Accepted publickey for core from 10.200.16.10 port 43366 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:53:29.128571 sshd[1869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:53:29.133218 systemd-logind[1476]: New session 4 of user core. Jun 25 14:53:29.139003 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 14:53:29.461825 sshd[1869]: pam_unix(sshd:session): session closed for user core Jun 25 14:53:29.464542 systemd[1]: sshd@1-10.200.20.34:22-10.200.16.10:43366.service: Deactivated successfully. Jun 25 14:53:29.465247 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 14:53:29.465833 systemd-logind[1476]: Session 4 logged out. Waiting for processes to exit. Jun 25 14:53:29.466776 systemd-logind[1476]: Removed session 4. Jun 25 14:53:29.546660 systemd[1]: Started sshd@2-10.200.20.34:22-10.200.16.10:43378.service - OpenSSH per-connection server daemon (10.200.16.10:43378). Jun 25 14:53:29.993429 sshd[1875]: Accepted publickey for core from 10.200.16.10 port 43378 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:53:29.994712 sshd[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:53:29.998742 systemd-logind[1476]: New session 5 of user core. Jun 25 14:53:30.009954 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 14:53:30.321986 sshd[1875]: pam_unix(sshd:session): session closed for user core Jun 25 14:53:30.324248 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 14:53:30.324843 systemd[1]: sshd@2-10.200.20.34:22-10.200.16.10:43378.service: Deactivated successfully. Jun 25 14:53:30.325636 systemd-logind[1476]: Session 5 logged out. Waiting for processes to exit. Jun 25 14:53:30.326388 systemd-logind[1476]: Removed session 5. Jun 25 14:53:30.396541 systemd[1]: Started sshd@3-10.200.20.34:22-10.200.16.10:43388.service - OpenSSH per-connection server daemon (10.200.16.10:43388). 
Jun 25 14:53:30.802407 sshd[1881]: Accepted publickey for core from 10.200.16.10 port 43388 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:53:30.804082 sshd[1881]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:53:30.807843 systemd-logind[1476]: New session 6 of user core. Jun 25 14:53:30.817019 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 14:53:31.120274 sshd[1881]: pam_unix(sshd:session): session closed for user core Jun 25 14:53:31.123159 systemd[1]: sshd@3-10.200.20.34:22-10.200.16.10:43388.service: Deactivated successfully. Jun 25 14:53:31.123818 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 14:53:31.124372 systemd-logind[1476]: Session 6 logged out. Waiting for processes to exit. Jun 25 14:53:31.125217 systemd-logind[1476]: Removed session 6. Jun 25 14:53:31.200508 systemd[1]: Started sshd@4-10.200.20.34:22-10.200.16.10:43400.service - OpenSSH per-connection server daemon (10.200.16.10:43400). Jun 25 14:53:31.641241 sshd[1887]: Accepted publickey for core from 10.200.16.10 port 43400 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:53:31.642908 sshd[1887]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:53:31.646872 systemd-logind[1476]: New session 7 of user core. Jun 25 14:53:31.651934 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 14:53:32.020333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 14:53:32.020502 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:53:32.028124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:53:32.521893 sudo[1890]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 14:53:32.522139 sudo[1890]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:53:32.793992 sudo[1890]: pam_unix(sudo:session): session closed for user root Jun 25 14:53:32.922905 sshd[1887]: pam_unix(sshd:session): session closed for user core Jun 25 14:53:32.925812 systemd[1]: sshd@4-10.200.20.34:22-10.200.16.10:43400.service: Deactivated successfully. Jun 25 14:53:32.926683 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 14:53:32.930593 systemd-logind[1476]: Session 7 logged out. Waiting for processes to exit. Jun 25 14:53:32.932019 systemd-logind[1476]: Removed session 7. Jun 25 14:53:32.936676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:53:32.956733 systemd[1]: Started sshd@5-10.200.20.34:22-10.200.16.10:43412.service - OpenSSH per-connection server daemon (10.200.16.10:43412). Jun 25 14:53:32.985105 kubelet[1897]: E0625 14:53:32.985045 1897 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:53:32.987228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:53:32.987364 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 14:53:33.397809 sshd[1903]: Accepted publickey for core from 10.200.16.10 port 43412 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:53:33.399482 sshd[1903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:53:33.403471 systemd-logind[1476]: New session 8 of user core. Jun 25 14:53:33.409983 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 14:53:33.651555 sudo[1908]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 14:53:33.652097 sudo[1908]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:53:33.655203 sudo[1908]: pam_unix(sudo:session): session closed for user root Jun 25 14:53:33.660050 sudo[1907]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 14:53:33.660612 sudo[1907]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:53:33.679119 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 14:53:33.679000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:53:33.684270 kernel: kauditd_printk_skb: 89 callbacks suppressed Jun 25 14:53:33.684334 kernel: audit: type=1305 audit(1719327213.679:200): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:53:33.684600 auditctl[1911]: No rules Jun 25 14:53:33.685134 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 14:53:33.685340 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 14:53:33.695684 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 14:53:33.679000 audit[1911]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffca6d1270 a2=420 a3=0 items=0 ppid=1 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:33.719645 kernel: audit: type=1300 audit(1719327213.679:200): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffca6d1270 a2=420 a3=0 items=0 ppid=1 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:33.679000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:53:33.727007 kernel: audit: type=1327 audit(1719327213.679:200): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:53:33.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:33.743199 kernel: audit: type=1131 audit(1719327213.684:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:33.743425 augenrules[1928]: No rules Jun 25 14:53:33.744170 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
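[Annotation] The audit PROCTITLE fields in the records above are hex-encoded, NUL-separated argv. Decoding the one attached to the auditctl event shows what produced the "No rules" line (illustrative):

    echo 2F7362696E2F617564697463746C002D44 | xxd -r -p | tr '\0' ' '; echo
    # -> /sbin/auditctl -D    (flush all audit rules)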
Jun 25 14:53:33.745497 sudo[1907]: pam_unix(sudo:session): session closed for user root Jun 25 14:53:33.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:33.744000 audit[1907]: USER_END pid=1907 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:53:33.779647 kernel: audit: type=1130 audit(1719327213.743:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:33.779732 kernel: audit: type=1106 audit(1719327213.744:203): pid=1907 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:53:33.744000 audit[1907]: CRED_DISP pid=1907 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:53:33.796352 kernel: audit: type=1104 audit(1719327213.744:204): pid=1907 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:53:33.842168 sshd[1903]: pam_unix(sshd:session): session closed for user core Jun 25 14:53:33.842000 audit[1903]: USER_END pid=1903 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:53:33.845456 systemd-logind[1476]: Session 8 logged out. Waiting for processes to exit. Jun 25 14:53:33.846740 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 14:53:33.848032 systemd-logind[1476]: Removed session 8. Jun 25 14:53:33.848675 systemd[1]: sshd@5-10.200.20.34:22-10.200.16.10:43412.service: Deactivated successfully. 
Jun 25 14:53:33.842000 audit[1903]: CRED_DISP pid=1903 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:53:33.885262 kernel: audit: type=1106 audit(1719327213.842:205): pid=1903 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:53:33.885314 kernel: audit: type=1104 audit(1719327213.842:206): pid=1903 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:53:33.885352 kernel: audit: type=1131 audit(1719327213.847:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.34:22-10.200.16.10:43412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:33.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.34:22-10.200.16.10:43412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:33.918655 systemd[1]: Started sshd@6-10.200.20.34:22-10.200.16.10:43414.service - OpenSSH per-connection server daemon (10.200.16.10:43414). Jun 25 14:53:33.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.34:22-10.200.16.10:43414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:34.329000 audit[1934]: USER_ACCT pid=1934 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:53:34.330119 sshd[1934]: Accepted publickey for core from 10.200.16.10 port 43414 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:53:34.330000 audit[1934]: CRED_ACQ pid=1934 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:53:34.330000 audit[1934]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd078e8f0 a2=3 a3=1 items=0 ppid=1 pid=1934 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:34.330000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:53:34.331740 sshd[1934]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:53:34.335843 systemd-logind[1476]: New session 9 of user core. Jun 25 14:53:34.340951 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 25 14:53:34.344000 audit[1934]: USER_START pid=1934 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:53:34.346000 audit[1936]: CRED_ACQ pid=1936 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:53:34.568000 audit[1937]: USER_ACCT pid=1937 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:53:34.569506 sudo[1937]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 14:53:34.568000 audit[1937]: CRED_REFR pid=1937 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:53:34.569748 sudo[1937]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:53:34.570000 audit[1937]: USER_START pid=1937 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:53:34.919230 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 14:53:35.714271 dockerd[1946]: time="2024-06-25T14:53:35.714203437Z" level=info msg="Starting up" Jun 25 14:53:35.749144 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4007607849-merged.mount: Deactivated successfully. Jun 25 14:53:35.794567 systemd[1]: var-lib-docker-metacopy\x2dcheck2744546107-merged.mount: Deactivated successfully. Jun 25 14:53:35.816082 dockerd[1946]: time="2024-06-25T14:53:35.816042934Z" level=info msg="Loading containers: start." 
Jun 25 14:53:35.851000 audit[1975]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:35.851000 audit[1975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffec8772f0 a2=0 a3=1 items=0 ppid=1946 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:35.851000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 14:53:35.853000 audit[1977]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:35.853000 audit[1977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffde01ffe0 a2=0 a3=1 items=0 ppid=1946 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:35.853000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 14:53:35.856000 audit[1979]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:35.856000 audit[1979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffff046780 a2=0 a3=1 items=0 ppid=1946 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:35.856000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:53:35.859000 audit[1981]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:35.859000 audit[1981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffc860da0 a2=0 a3=1 items=0 ppid=1946 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:35.859000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:53:35.861000 audit[1983]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:35.861000 audit[1983]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd3a9f510 a2=0 a3=1 items=0 ppid=1946 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:35.861000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 14:53:35.863000 audit[1985]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=1985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:35.863000 audit[1985]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffec4a8ab0 a2=0 a3=1 items=0 ppid=1946 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:35.863000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 14:53:35.891000 audit[1987]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:35.891000 audit[1987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc6688820 a2=0 a3=1 items=0 ppid=1946 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:35.891000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 14:53:35.893000 audit[1989]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:35.893000 audit[1989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff2c12550 a2=0 a3=1 items=0 ppid=1946 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:35.893000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 14:53:35.895000 audit[1991]: NETFILTER_CFG table=filter:13 family=2 entries=2 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:35.895000 audit[1991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffebffdc10 a2=0 a3=1 items=0 ppid=1946 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:35.895000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:53:35.912000 audit[1995]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_unregister_rule pid=1995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:35.912000 audit[1995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc1dcff90 a2=0 a3=1 items=0 ppid=1946 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:35.912000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:53:35.913000 audit[1996]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1996 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:35.913000 audit[1996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe3711ef0 a2=0 a3=1 items=0 ppid=1946 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:35.913000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:53:35.940821 kernel: Initializing XFRM netlink socket Jun 25 14:53:36.024000 audit[2004]: NETFILTER_CFG table=nat:16 family=2 entries=2 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:36.024000 audit[2004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=fffff55d4c20 a2=0 a3=1 items=0 ppid=1946 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.024000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 14:53:36.033000 audit[2007]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:36.033000 audit[2007]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffddd13ec0 a2=0 a3=1 items=0 ppid=1946 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.033000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 14:53:36.037000 audit[2011]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:36.037000 audit[2011]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffcd513680 a2=0 a3=1 items=0 ppid=1946 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.037000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 14:53:36.039000 audit[2013]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:36.039000 audit[2013]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffdf10add0 a2=0 a3=1 items=0 ppid=1946 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.039000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 14:53:36.041000 audit[2015]: NETFILTER_CFG table=nat:20 family=2 entries=2 op=nft_register_chain pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:36.041000 audit[2015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffcdf9fc80 a2=0 a3=1 items=0 ppid=1946 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.041000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 14:53:36.043000 audit[2017]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:36.043000 audit[2017]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffef9545a0 a2=0 a3=1 items=0 ppid=1946 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.043000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 14:53:36.045000 audit[2019]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:36.045000 audit[2019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=fffffc6f44a0 a2=0 a3=1 items=0 ppid=1946 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.045000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 14:53:36.048000 audit[2021]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:36.048000 audit[2021]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffefc27440 a2=0 a3=1 items=0 ppid=1946 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.048000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 14:53:36.050000 audit[2023]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:36.050000 audit[2023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffd284c440 a2=0 a3=1 items=0 ppid=1946 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.050000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:53:36.052000 audit[2025]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:36.052000 audit[2025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd1ed95a0 a2=0 a3=1 items=0 ppid=1946 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.052000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:53:36.055000 audit[2027]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:36.055000 audit[2027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffa1ff930 a2=0 a3=1 items=0 ppid=1946 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.055000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 14:53:36.056490 systemd-networkd[1255]: docker0: Link UP Jun 25 14:53:36.075000 audit[2031]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_unregister_rule pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:36.075000 audit[2031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe5587270 a2=0 a3=1 items=0 ppid=1946 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.075000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:53:36.076000 audit[2032]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:53:36.076000 audit[2032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe7657640 a2=0 a3=1 items=0 ppid=1946 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.076000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:53:36.078195 dockerd[1946]: time="2024-06-25T14:53:36.078164355Z" level=info msg="Loading containers: done." Jun 25 14:53:36.480682 dockerd[1946]: time="2024-06-25T14:53:36.480574909Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 14:53:36.481609 dockerd[1946]: time="2024-06-25T14:53:36.481583995Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 14:53:36.481892 dockerd[1946]: time="2024-06-25T14:53:36.481876237Z" level=info msg="Daemon has completed initialization" Jun 25 14:53:36.522968 dockerd[1946]: time="2024-06-25T14:53:36.522901085Z" level=info msg="API listen on /run/docker.sock" Jun 25 14:53:36.523299 systemd[1]: Started docker.service - Docker Application Container Engine. 
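[Annotation] The block of NETFILTER_CFG records above is the Docker daemon building its NAT and filter chains (DOCKER, DOCKER-USER, DOCKER-ISOLATION-STAGE-1/2) before bringing docker0 up; each PROCTITLE is a hex-encoded iptables invocation. The resulting state can be inspected afterwards with (illustrative):

    ip -br addr show docker0                        # the bridge brought UP above
    iptables -w -t nat -nvL POSTROUTING             # MASQUERADE for the bridge subnet
    iptables -w -t filter -nvL DOCKER-USER          # user chain hooked into FORWARD
    docker info --format '{{.Driver}} {{.ServerVersion}}'   # overlay2 24.0.9, per the daemon log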
Jun 25 14:53:36.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:36.746929 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2579426953-merged.mount: Deactivated successfully. Jun 25 14:53:37.731921 containerd[1492]: time="2024-06-25T14:53:37.731793843Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 14:53:38.592477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1612642014.mount: Deactivated successfully. Jun 25 14:53:40.145047 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jun 25 14:53:40.371174 containerd[1492]: time="2024-06-25T14:53:40.371124921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:40.375064 containerd[1492]: time="2024-06-25T14:53:40.375008019Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671538" Jun 25 14:53:40.378621 containerd[1492]: time="2024-06-25T14:53:40.378582116Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:40.382007 containerd[1492]: time="2024-06-25T14:53:40.381955892Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:40.386228 containerd[1492]: time="2024-06-25T14:53:40.386171191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:40.387602 containerd[1492]: time="2024-06-25T14:53:40.387546518Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 2.655626874s" Jun 25 14:53:40.387678 containerd[1492]: time="2024-06-25T14:53:40.387599918Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jun 25 14:53:40.408222 containerd[1492]: time="2024-06-25T14:53:40.408092374Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 14:53:42.480554 containerd[1492]: time="2024-06-25T14:53:42.480506736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:42.483229 containerd[1492]: time="2024-06-25T14:53:42.483198067Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893118" Jun 25 14:53:42.488656 containerd[1492]: time="2024-06-25T14:53:42.488629329Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:42.493897 containerd[1492]: time="2024-06-25T14:53:42.493856311Z" level=info msg="ImageUpdate 
event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:42.498748 containerd[1492]: time="2024-06-25T14:53:42.498694771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:42.500173 containerd[1492]: time="2024-06-25T14:53:42.500132137Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 2.091981242s" Jun 25 14:53:42.500243 containerd[1492]: time="2024-06-25T14:53:42.500178497Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jun 25 14:53:42.519082 containerd[1492]: time="2024-06-25T14:53:42.519038254Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 14:53:43.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:43.020298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 14:53:43.059225 kernel: kauditd_printk_skb: 84 callbacks suppressed Jun 25 14:53:43.059257 kernel: audit: type=1130 audit(1719327223.019:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:43.059280 kernel: audit: type=1131 audit(1719327223.019:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:43.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:43.020487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:53:43.058174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:53:43.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:43.635965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:53:43.654842 kernel: audit: type=1130 audit(1719327223.635:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:53:43.724667 kubelet[2145]: E0625 14:53:43.724596 2145 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:53:43.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:53:43.727194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:53:43.727321 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:53:43.743869 kernel: audit: type=1131 audit(1719327223.726:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:53:43.920337 update_engine[1479]: I0625 14:53:43.919855 1479 update_attempter.cc:509] Updating boot flags... Jun 25 14:53:44.059846 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2164) Jun 25 14:53:44.112851 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2162) Jun 25 14:53:44.203858 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2162) Jun 25 14:53:44.737417 containerd[1492]: time="2024-06-25T14:53:44.737370863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:44.739339 containerd[1492]: time="2024-06-25T14:53:44.739307710Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358438" Jun 25 14:53:44.743143 containerd[1492]: time="2024-06-25T14:53:44.743117084Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:44.758364 containerd[1492]: time="2024-06-25T14:53:44.758333699Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:44.762364 containerd[1492]: time="2024-06-25T14:53:44.762320433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:44.763577 containerd[1492]: time="2024-06-25T14:53:44.763539918Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 2.244269543s" Jun 25 14:53:44.763694 containerd[1492]: time="2024-06-25T14:53:44.763674998Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jun 25 14:53:44.783490 containerd[1492]: time="2024-06-25T14:53:44.783440830Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 14:53:46.171354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222463494.mount: Deactivated successfully. Jun 25 14:53:47.681293 containerd[1492]: time="2024-06-25T14:53:47.681233518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:47.685231 containerd[1492]: time="2024-06-25T14:53:47.685185609Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772461" Jun 25 14:53:47.687858 containerd[1492]: time="2024-06-25T14:53:47.687825457Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:47.696146 containerd[1492]: time="2024-06-25T14:53:47.696111842Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:47.701335 containerd[1492]: time="2024-06-25T14:53:47.701303337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:47.701973 containerd[1492]: time="2024-06-25T14:53:47.701930979Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 2.918444269s" Jun 25 14:53:47.701973 containerd[1492]: time="2024-06-25T14:53:47.701970739Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jun 25 14:53:47.724047 containerd[1492]: time="2024-06-25T14:53:47.724000205Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 14:53:48.373971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4181737272.mount: Deactivated successfully. 
Jun 25 14:53:48.399103 containerd[1492]: time="2024-06-25T14:53:48.399057099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:48.403035 containerd[1492]: time="2024-06-25T14:53:48.402983910Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jun 25 14:53:48.406454 containerd[1492]: time="2024-06-25T14:53:48.406429719Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:48.411178 containerd[1492]: time="2024-06-25T14:53:48.411147053Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:48.414913 containerd[1492]: time="2024-06-25T14:53:48.414870783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:48.415826 containerd[1492]: time="2024-06-25T14:53:48.415770385Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 691.7233ms" Jun 25 14:53:48.415936 containerd[1492]: time="2024-06-25T14:53:48.415916506Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jun 25 14:53:48.437560 containerd[1492]: time="2024-06-25T14:53:48.437505806Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 14:53:49.143823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943007157.mount: Deactivated successfully. Jun 25 14:53:53.770388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 25 14:53:53.807115 kernel: audit: type=1130 audit(1719327233.769:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:53.807163 kernel: audit: type=1131 audit(1719327233.769:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:53.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:53.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:53.770617 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:53:53.807151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:53:53.893956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
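
The audit(1719327233.769:246) stamp in the kernel audit lines above is a Unix epoch timestamp followed by the audit record serial number, so it can be cross-checked against the journal's wall-clock timestamps. A quick Python check using the value from the record above:

    from datetime import datetime, timezone

    # audit(1719327233.769:246) -> epoch seconds 1719327233.769, serial 246
    ts = datetime.fromtimestamp(1719327233.769, tz=timezone.utc)
    print(ts.isoformat())
    # -> 2024-06-25T14:53:53.769000+00:00, matching the surrounding
    #    "Jun 25 14:53:53" journal entries for the kubelet restart
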
Jun 25 14:53:53.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:53.912879 kernel: audit: type=1130 audit(1719327233.893:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:54.625536 kubelet[2323]: E0625 14:53:54.625484 2323 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:53:54.627732 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:53:54.627889 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:53:54.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:53:54.645806 kernel: audit: type=1131 audit(1719327234.627:249): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:53:55.336602 containerd[1492]: time="2024-06-25T14:53:55.336550731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:55.338640 containerd[1492]: time="2024-06-25T14:53:55.338605935Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jun 25 14:53:55.344269 containerd[1492]: time="2024-06-25T14:53:55.344237745Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:55.347958 containerd[1492]: time="2024-06-25T14:53:55.347924951Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:55.352579 containerd[1492]: time="2024-06-25T14:53:55.352536840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:55.355000 containerd[1492]: time="2024-06-25T14:53:55.354956124Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 6.917401838s" Jun 25 14:53:55.355145 containerd[1492]: time="2024-06-25T14:53:55.355118244Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jun 25 14:53:55.377553 containerd[1492]: time="2024-06-25T14:53:55.377505924Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 14:53:56.098551 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1212620154.mount: Deactivated successfully. Jun 25 14:53:56.719863 containerd[1492]: time="2024-06-25T14:53:56.719807787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:56.739922 containerd[1492]: time="2024-06-25T14:53:56.739851701Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462" Jun 25 14:53:56.758512 containerd[1492]: time="2024-06-25T14:53:56.758472132Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:56.766315 containerd[1492]: time="2024-06-25T14:53:56.766278385Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:56.772391 containerd[1492]: time="2024-06-25T14:53:56.772354275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:56.773355 containerd[1492]: time="2024-06-25T14:53:56.773316396Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.395541712s" Jun 25 14:53:56.773496 containerd[1492]: time="2024-06-25T14:53:56.773475357Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jun 25 14:54:01.559829 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:54:01.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:01.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:01.592016 kernel: audit: type=1130 audit(1719327241.559:250): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:01.592080 kernel: audit: type=1131 audit(1719327241.559:251): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:01.593472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:54:01.614641 systemd[1]: Reloading. Jun 25 14:54:01.783608 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 25 14:54:02.263564 kernel: audit: type=1334 audit(1719327241.855:252): prog-id=69 op=LOAD Jun 25 14:54:02.263644 kernel: audit: type=1334 audit(1719327241.855:253): prog-id=55 op=UNLOAD Jun 25 14:54:02.263669 kernel: audit: type=1334 audit(1719327241.860:254): prog-id=70 op=LOAD Jun 25 14:54:02.263687 kernel: audit: type=1334 audit(1719327241.861:255): prog-id=71 op=LOAD Jun 25 14:54:02.263704 kernel: audit: type=1334 audit(1719327241.861:256): prog-id=56 op=UNLOAD Jun 25 14:54:02.263721 kernel: audit: type=1334 audit(1719327241.861:257): prog-id=57 op=UNLOAD Jun 25 14:54:02.263737 kernel: audit: type=1334 audit(1719327241.866:258): prog-id=72 op=LOAD Jun 25 14:54:02.263754 kernel: audit: type=1334 audit(1719327241.867:259): prog-id=73 op=LOAD Jun 25 14:54:01.855000 audit: BPF prog-id=69 op=LOAD Jun 25 14:54:01.855000 audit: BPF prog-id=55 op=UNLOAD Jun 25 14:54:01.860000 audit: BPF prog-id=70 op=LOAD Jun 25 14:54:01.861000 audit: BPF prog-id=71 op=LOAD Jun 25 14:54:01.861000 audit: BPF prog-id=56 op=UNLOAD Jun 25 14:54:01.861000 audit: BPF prog-id=57 op=UNLOAD Jun 25 14:54:01.866000 audit: BPF prog-id=72 op=LOAD Jun 25 14:54:01.867000 audit: BPF prog-id=73 op=LOAD Jun 25 14:54:01.867000 audit: BPF prog-id=58 op=UNLOAD Jun 25 14:54:01.867000 audit: BPF prog-id=59 op=UNLOAD Jun 25 14:54:01.871000 audit: BPF prog-id=74 op=LOAD Jun 25 14:54:01.871000 audit: BPF prog-id=60 op=UNLOAD Jun 25 14:54:01.872000 audit: BPF prog-id=75 op=LOAD Jun 25 14:54:01.877000 audit: BPF prog-id=76 op=LOAD Jun 25 14:54:01.877000 audit: BPF prog-id=61 op=UNLOAD Jun 25 14:54:01.877000 audit: BPF prog-id=62 op=UNLOAD Jun 25 14:54:01.883000 audit: BPF prog-id=77 op=LOAD Jun 25 14:54:01.883000 audit: BPF prog-id=63 op=UNLOAD Jun 25 14:54:01.884000 audit: BPF prog-id=78 op=LOAD Jun 25 14:54:01.884000 audit: BPF prog-id=64 op=UNLOAD Jun 25 14:54:01.889000 audit: BPF prog-id=79 op=LOAD Jun 25 14:54:01.889000 audit: BPF prog-id=80 op=LOAD Jun 25 14:54:01.889000 audit: BPF prog-id=65 op=UNLOAD Jun 25 14:54:01.889000 audit: BPF prog-id=66 op=UNLOAD Jun 25 14:54:01.894000 audit: BPF prog-id=81 op=LOAD Jun 25 14:54:01.894000 audit: BPF prog-id=67 op=UNLOAD Jun 25 14:54:01.899000 audit: BPF prog-id=82 op=LOAD Jun 25 14:54:01.899000 audit: BPF prog-id=68 op=UNLOAD Jun 25 14:54:02.264473 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 14:54:02.264560 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 14:54:02.264959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:54:02.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:54:02.270808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:54:02.495649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:54:02.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:02.537400 kubelet[2483]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 14:54:02.537400 kubelet[2483]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 14:54:02.537400 kubelet[2483]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:54:02.537771 kubelet[2483]: I0625 14:54:02.537444 2483 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:54:03.755760 kubelet[2483]: I0625 14:54:03.755724 2483 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 14:54:03.755760 kubelet[2483]: I0625 14:54:03.755753 2483 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:54:03.756103 kubelet[2483]: I0625 14:54:03.755965 2483 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 14:54:03.771193 kubelet[2483]: I0625 14:54:03.771160 2483 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:54:03.774634 kubelet[2483]: E0625 14:54:03.774613 2483 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:03.780201 kubelet[2483]: W0625 14:54:03.780168 2483 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 14:54:03.781074 kubelet[2483]: I0625 14:54:03.781043 2483 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 14:54:03.781356 kubelet[2483]: I0625 14:54:03.781336 2483 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:54:03.781627 kubelet[2483]: I0625 14:54:03.781601 2483 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:54:03.781736 kubelet[2483]: I0625 14:54:03.781635 2483 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:54:03.781736 kubelet[2483]: I0625 14:54:03.781644 2483 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:54:03.781809 kubelet[2483]: I0625 14:54:03.781759 2483 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:54:03.783586 kubelet[2483]: I0625 14:54:03.783555 2483 kubelet.go:393] "Attempting to sync node with API server" Jun 25 14:54:03.783586 kubelet[2483]: I0625 14:54:03.783585 2483 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 14:54:03.783697 kubelet[2483]: I0625 14:54:03.783613 2483 kubelet.go:309] "Adding apiserver pod source" Jun 25 14:54:03.783697 kubelet[2483]: I0625 14:54:03.783624 2483 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:54:03.788624 kubelet[2483]: I0625 14:54:03.788594 2483 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:54:03.789871 kubelet[2483]: W0625 14:54:03.789849 2483 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jun 25 14:54:03.790300 kubelet[2483]: I0625 14:54:03.790275 2483 server.go:1232] "Started kubelet" Jun 25 14:54:03.790463 kubelet[2483]: W0625 14:54:03.790381 2483 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-f605b45a38&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:03.790505 kubelet[2483]: E0625 14:54:03.790478 2483 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-f605b45a38&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:03.790558 kubelet[2483]: W0625 14:54:03.790530 2483 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:03.790594 kubelet[2483]: E0625 14:54:03.790560 2483 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:03.792105 kubelet[2483]: I0625 14:54:03.792085 2483 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:54:03.792947 kubelet[2483]: I0625 14:54:03.792929 2483 server.go:462] "Adding debug handlers to kubelet server" Jun 25 14:54:03.794636 kubelet[2483]: E0625 14:54:03.794516 2483 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3815.2.4-a-f605b45a38.17dc4704d2c5028b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3815.2.4-a-f605b45a38", UID:"ci-3815.2.4-a-f605b45a38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3815.2.4-a-f605b45a38"}, FirstTimestamp:time.Date(2024, time.June, 25, 14, 54, 3, 790254731, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 14, 54, 3, 790254731, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3815.2.4-a-f605b45a38"}': 'Post "https://10.200.20.34:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.34:6443: connect: connection refused'(may retry after sleeping) Jun 25 14:54:03.794756 kubelet[2483]: E0625 14:54:03.794721 2483 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 14:54:03.794756 kubelet[2483]: E0625 14:54:03.794738 2483 kubelet.go:1431] "Image garbage collection 
failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:54:03.795575 kubelet[2483]: I0625 14:54:03.795553 2483 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:54:03.795990 kubelet[2483]: I0625 14:54:03.792090 2483 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 14:54:03.796328 kubelet[2483]: I0625 14:54:03.796208 2483 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 14:54:03.798601 kubelet[2483]: I0625 14:54:03.798562 2483 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 14:54:03.798672 kubelet[2483]: I0625 14:54:03.798654 2483 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 14:54:03.798717 kubelet[2483]: I0625 14:54:03.798699 2483 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 14:54:03.799002 kubelet[2483]: W0625 14:54:03.798961 2483 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:03.799002 kubelet[2483]: E0625 14:54:03.799004 2483 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:03.799442 kubelet[2483]: E0625 14:54:03.799411 2483 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-f605b45a38?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="200ms" Jun 25 14:54:03.799000 audit[2493]: NETFILTER_CFG table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:03.799000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffec4bb160 a2=0 a3=1 items=0 ppid=2483 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:03.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:54:03.800000 audit[2494]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:03.800000 audit[2494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffec162620 a2=0 a3=1 items=0 ppid=2483 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:03.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:54:03.802000 audit[2496]: NETFILTER_CFG table=filter:31 family=2 entries=2 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:03.802000 audit[2496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 
a1=ffffc581fd10 a2=0 a3=1 items=0 ppid=2483 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:03.802000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:54:03.804000 audit[2498]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:03.804000 audit[2498]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff97a46b0 a2=0 a3=1 items=0 ppid=2483 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:03.804000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:54:03.843000 audit[2505]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:03.843000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff7a36860 a2=0 a3=1 items=0 ppid=2483 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:03.843000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 14:54:03.844223 kubelet[2483]: I0625 14:54:03.844193 2483 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 14:54:03.844000 audit[2506]: NETFILTER_CFG table=mangle:34 family=10 entries=2 op=nft_register_chain pid=2506 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:03.844000 audit[2506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffc7ceab0 a2=0 a3=1 items=0 ppid=2483 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:03.844000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:54:03.844000 audit[2507]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:03.844000 audit[2507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe6078bd0 a2=0 a3=1 items=0 ppid=2483 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:03.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:54:03.845979 kubelet[2483]: I0625 14:54:03.845960 2483 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 14:54:03.846080 kubelet[2483]: I0625 14:54:03.846068 2483 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:54:03.846147 kubelet[2483]: I0625 14:54:03.846138 2483 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 14:54:03.846747 kubelet[2483]: E0625 14:54:03.846712 2483 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 14:54:03.846000 audit[2511]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:03.846000 audit[2511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd7675590 a2=0 a3=1 items=0 ppid=2483 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:03.846000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:54:03.847357 kubelet[2483]: W0625 14:54:03.847256 2483 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:03.847357 kubelet[2483]: E0625 14:54:03.847308 2483 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:03.847000 audit[2510]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:03.847000 audit[2510]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff3322b00 a2=0 a3=1 items=0 ppid=2483 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:03.847000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 14:54:03.848000 audit[2512]: NETFILTER_CFG table=nat:38 family=10 entries=2 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:03.848000 audit[2512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffc0b4a030 a2=0 a3=1 items=0 ppid=2483 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:03.848000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 14:54:03.848000 audit[2513]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=2513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:03.848000 audit[2513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffffdf6850 a2=0 a3=1 items=0 ppid=2483 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:03.848000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:54:03.849000 audit[2514]: NETFILTER_CFG table=filter:40 family=10 entries=2 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:03.849000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd02af8d0 a2=0 a3=1 items=0 ppid=2483 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:03.849000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:54:03.864356 kubelet[2483]: I0625 14:54:03.864333 2483 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:54:03.864486 kubelet[2483]: I0625 14:54:03.864474 2483 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:54:03.864568 kubelet[2483]: I0625 14:54:03.864559 2483 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:54:03.873970 kubelet[2483]: I0625 14:54:03.873942 2483 policy_none.go:49] "None policy: Start" Jun 25 14:54:03.874898 kubelet[2483]: I0625 14:54:03.874873 2483 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 14:54:03.875003 kubelet[2483]: I0625 14:54:03.874992 2483 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:54:03.894265 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 14:54:03.900252 kubelet[2483]: I0625 14:54:03.900171 2483 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-f605b45a38" Jun 25 14:54:03.900648 kubelet[2483]: E0625 14:54:03.900539 2483 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-3815.2.4-a-f605b45a38" Jun 25 14:54:03.903316 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 14:54:03.905960 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 25 14:54:03.916614 kubelet[2483]: I0625 14:54:03.916582 2483 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 14:54:03.918464 kubelet[2483]: I0625 14:54:03.918425 2483 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:54:03.919622 kubelet[2483]: E0625 14:54:03.919324 2483 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3815.2.4-a-f605b45a38\" not found" Jun 25 14:54:03.947802 kubelet[2483]: I0625 14:54:03.947757 2483 topology_manager.go:215] "Topology Admit Handler" podUID="5d6dfbc432e8332f9f9569b1d8ef77a3" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:03.949429 kubelet[2483]: I0625 14:54:03.949410 2483 topology_manager.go:215] "Topology Admit Handler" podUID="974a83ab3ac7fb6bccd14f63f6b3ce5c" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:03.951351 kubelet[2483]: I0625 14:54:03.951079 2483 topology_manager.go:215] "Topology Admit Handler" podUID="f86b874b003ea5b31ff48bbf2cb73c0a" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:03.955628 systemd[1]: Created slice kubepods-burstable-pod5d6dfbc432e8332f9f9569b1d8ef77a3.slice - libcontainer container kubepods-burstable-pod5d6dfbc432e8332f9f9569b1d8ef77a3.slice. Jun 25 14:54:03.969609 systemd[1]: Created slice kubepods-burstable-pod974a83ab3ac7fb6bccd14f63f6b3ce5c.slice - libcontainer container kubepods-burstable-pod974a83ab3ac7fb6bccd14f63f6b3ce5c.slice. Jun 25 14:54:03.982173 systemd[1]: Created slice kubepods-burstable-podf86b874b003ea5b31ff48bbf2cb73c0a.slice - libcontainer container kubepods-burstable-podf86b874b003ea5b31ff48bbf2cb73c0a.slice. 
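
The slices systemd creates here follow directly from the pods the kubelet just admitted: with the systemd cgroup driver, each pod gets a slice named after its QoS class and pod UID, parented under the kubepods-burstable.slice / kubepods.slice hierarchy set up a moment earlier. A rough sketch of the naming as it appears in this log (for regular pods with UUID-style UIDs the kubelet additionally escapes the dashes in the UID; that does not arise here because static-pod UIDs are plain hashes):

    def pod_slice(qos_class: str, pod_uid: str) -> str:
        # e.g. kubepods-burstable-pod5d6dfbc432e8332f9f9569b1d8ef77a3.slice
        return f"kubepods-{qos_class}-pod{pod_uid}.slice"

    print(pod_slice("burstable", "5d6dfbc432e8332f9f9569b1d8ef77a3"))
    # -> kubepods-burstable-pod5d6dfbc432e8332f9f9569b1d8ef77a3.slice
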
Jun 25 14:54:04.000665 kubelet[2483]: E0625 14:54:04.000629 2483 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-f605b45a38?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="400ms" Jun 25 14:54:04.100115 kubelet[2483]: I0625 14:54:04.100081 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/974a83ab3ac7fb6bccd14f63f6b3ce5c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.4-a-f605b45a38\" (UID: \"974a83ab3ac7fb6bccd14f63f6b3ce5c\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:04.100339 kubelet[2483]: I0625 14:54:04.100325 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f86b874b003ea5b31ff48bbf2cb73c0a-kubeconfig\") pod \"kube-scheduler-ci-3815.2.4-a-f605b45a38\" (UID: \"f86b874b003ea5b31ff48bbf2cb73c0a\") " pod="kube-system/kube-scheduler-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:04.100430 kubelet[2483]: I0625 14:54:04.100420 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d6dfbc432e8332f9f9569b1d8ef77a3-k8s-certs\") pod \"kube-apiserver-ci-3815.2.4-a-f605b45a38\" (UID: \"5d6dfbc432e8332f9f9569b1d8ef77a3\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:04.100515 kubelet[2483]: I0625 14:54:04.100505 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/974a83ab3ac7fb6bccd14f63f6b3ce5c-ca-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-f605b45a38\" (UID: \"974a83ab3ac7fb6bccd14f63f6b3ce5c\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:04.100604 kubelet[2483]: I0625 14:54:04.100595 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/974a83ab3ac7fb6bccd14f63f6b3ce5c-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.4-a-f605b45a38\" (UID: \"974a83ab3ac7fb6bccd14f63f6b3ce5c\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:04.100838 kubelet[2483]: I0625 14:54:04.100675 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/974a83ab3ac7fb6bccd14f63f6b3ce5c-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-f605b45a38\" (UID: \"974a83ab3ac7fb6bccd14f63f6b3ce5c\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:04.100979 kubelet[2483]: I0625 14:54:04.100968 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/974a83ab3ac7fb6bccd14f63f6b3ce5c-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.4-a-f605b45a38\" (UID: \"974a83ab3ac7fb6bccd14f63f6b3ce5c\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:04.101117 kubelet[2483]: I0625 14:54:04.101108 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/5d6dfbc432e8332f9f9569b1d8ef77a3-ca-certs\") pod \"kube-apiserver-ci-3815.2.4-a-f605b45a38\" (UID: \"5d6dfbc432e8332f9f9569b1d8ef77a3\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:04.101228 kubelet[2483]: I0625 14:54:04.101219 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d6dfbc432e8332f9f9569b1d8ef77a3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.4-a-f605b45a38\" (UID: \"5d6dfbc432e8332f9f9569b1d8ef77a3\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:04.102274 kubelet[2483]: I0625 14:54:04.102252 2483 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-f605b45a38" Jun 25 14:54:04.102625 kubelet[2483]: E0625 14:54:04.102609 2483 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-3815.2.4-a-f605b45a38" Jun 25 14:54:04.268928 containerd[1492]: time="2024-06-25T14:54:04.268869747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.4-a-f605b45a38,Uid:5d6dfbc432e8332f9f9569b1d8ef77a3,Namespace:kube-system,Attempt:0,}" Jun 25 14:54:04.281172 containerd[1492]: time="2024-06-25T14:54:04.281129584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.4-a-f605b45a38,Uid:974a83ab3ac7fb6bccd14f63f6b3ce5c,Namespace:kube-system,Attempt:0,}" Jun 25 14:54:04.285904 containerd[1492]: time="2024-06-25T14:54:04.285857670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.4-a-f605b45a38,Uid:f86b874b003ea5b31ff48bbf2cb73c0a,Namespace:kube-system,Attempt:0,}" Jun 25 14:54:04.402105 kubelet[2483]: E0625 14:54:04.402012 2483 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-f605b45a38?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="800ms" Jun 25 14:54:04.504793 kubelet[2483]: I0625 14:54:04.504753 2483 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-f605b45a38" Jun 25 14:54:04.505095 kubelet[2483]: E0625 14:54:04.505072 2483 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-3815.2.4-a-f605b45a38" Jun 25 14:54:04.602991 kubelet[2483]: W0625 14:54:04.602939 2483 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:04.603148 kubelet[2483]: E0625 14:54:04.603136 2483 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:04.857027 kubelet[2483]: W0625 14:54:04.856977 2483 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-f605b45a38&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:04.857385 kubelet[2483]: E0625 14:54:04.857371 2483 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-f605b45a38&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:04.989035 kubelet[2483]: W0625 14:54:04.988969 2483 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:04.989035 kubelet[2483]: E0625 14:54:04.989037 2483 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:05.203460 kubelet[2483]: E0625 14:54:05.203365 2483 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-f605b45a38?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="1.6s" Jun 25 14:54:05.259222 kubelet[2483]: W0625 14:54:05.259153 2483 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:05.259222 kubelet[2483]: E0625 14:54:05.259226 2483 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:05.307258 kubelet[2483]: I0625 14:54:05.307216 2483 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-f605b45a38" Jun 25 14:54:05.307582 kubelet[2483]: E0625 14:54:05.307568 2483 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-3815.2.4-a-f605b45a38" Jun 25 14:54:05.894311 kubelet[2483]: E0625 14:54:05.894269 2483 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:06.153770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2778151052.mount: Deactivated successfully. 
Jun 25 14:54:06.176914 containerd[1492]: time="2024-06-25T14:54:06.176857411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:54:06.189396 containerd[1492]: time="2024-06-25T14:54:06.189352305Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jun 25 14:54:06.193008 containerd[1492]: time="2024-06-25T14:54:06.192969683Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:54:06.195187 containerd[1492]: time="2024-06-25T14:54:06.195151590Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:54:06.199034 containerd[1492]: time="2024-06-25T14:54:06.198998099Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:54:06.204984 containerd[1492]: time="2024-06-25T14:54:06.204935391Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:54:06.211165 containerd[1492]: time="2024-06-25T14:54:06.211118735Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:54:06.213639 containerd[1492]: time="2024-06-25T14:54:06.213600257Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:54:06.217027 containerd[1492]: time="2024-06-25T14:54:06.216991224Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:54:06.220854 containerd[1492]: time="2024-06-25T14:54:06.220816132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:54:06.222293 containerd[1492]: time="2024-06-25T14:54:06.222252162Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.95327217s" Jun 25 14:54:06.224659 containerd[1492]: time="2024-06-25T14:54:06.224617878Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:54:06.235301 containerd[1492]: time="2024-06-25T14:54:06.235257961Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 
14:54:06.244153 containerd[1492]: time="2024-06-25T14:54:06.244108156Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:54:06.247516 containerd[1492]: time="2024-06-25T14:54:06.247477922Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:54:06.248902 containerd[1492]: time="2024-06-25T14:54:06.248870271Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.962928476s" Jun 25 14:54:06.273021 containerd[1492]: time="2024-06-25T14:54:06.272972935Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:54:06.273958 containerd[1492]: time="2024-06-25T14:54:06.273925702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.992687793s" Jun 25 14:54:06.803913 kubelet[2483]: E0625 14:54:06.803871 2483 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-f605b45a38?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="3.2s" Jun 25 14:54:06.898217 containerd[1492]: time="2024-06-25T14:54:06.898103345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:54:06.898217 containerd[1492]: time="2024-06-25T14:54:06.898170308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:06.898478 containerd[1492]: time="2024-06-25T14:54:06.898423601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:54:06.898478 containerd[1492]: time="2024-06-25T14:54:06.898448082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:06.898645 containerd[1492]: time="2024-06-25T14:54:06.898588449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:54:06.898725 containerd[1492]: time="2024-06-25T14:54:06.898688774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:06.898764 containerd[1492]: time="2024-06-25T14:54:06.898737496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:54:06.898818 containerd[1492]: time="2024-06-25T14:54:06.898768378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:06.906385 containerd[1492]: time="2024-06-25T14:54:06.906135180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:54:06.906385 containerd[1492]: time="2024-06-25T14:54:06.906194623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:06.906385 containerd[1492]: time="2024-06-25T14:54:06.906213704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:54:06.906385 containerd[1492]: time="2024-06-25T14:54:06.906227504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:06.909820 kubelet[2483]: I0625 14:54:06.909657 2483 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-f605b45a38" Jun 25 14:54:06.910089 kubelet[2483]: E0625 14:54:06.909997 2483 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-3815.2.4-a-f605b45a38" Jun 25 14:54:06.918973 systemd[1]: Started cri-containerd-2e8797e4d2e535e2367a8f556cf0faab3f4fcc848da18945093f2fc3b70e55e0.scope - libcontainer container 2e8797e4d2e535e2367a8f556cf0faab3f4fcc848da18945093f2fc3b70e55e0. Jun 25 14:54:06.921654 systemd[1]: Started cri-containerd-13bb7ea04183fdae95429b083ac22de06d7774654f075bbf5e93942273520350.scope - libcontainer container 13bb7ea04183fdae95429b083ac22de06d7774654f075bbf5e93942273520350. 
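The three "Pulled image \"registry.k8s.io/pause:3.8\" ... in 1.95327217s / 1.962928476s / 1.992687793s" entries above record one pause (sandbox) image resolution per control-plane pod sandbox. A small offline sketch, assuming a saved copy of this journal is piped in on stdin, that extracts those image/duration pairs from containerd's log lines; the regular expression tolerates both the escaped quoting shown here and plain journalctl output:

// pulltime.go — pulls the "Pulled image ... in <duration>" figures out of containerd log lines.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var pulled = regexp.MustCompile(`Pulled image \\?"([^"\\]+)\\?".* in ([0-9.]+m?s)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := pulled.FindStringSubmatch(sc.Text()); m != nil {
			d, err := time.ParseDuration(m[2])
			if err != nil {
				continue
			}
			fmt.Printf("%-40s %v\n", m[1], d)
		}
	}
}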
Jun 25 14:54:06.929000 audit: BPF prog-id=83 op=LOAD Jun 25 14:54:06.934830 kubelet[2483]: W0625 14:54:06.934744 2483 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:06.934830 kubelet[2483]: E0625 14:54:06.934808 2483 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:06.935585 kernel: kauditd_printk_skb: 58 callbacks suppressed Jun 25 14:54:06.935653 kernel: audit: type=1334 audit(1719327246.929:294): prog-id=83 op=LOAD Jun 25 14:54:06.940000 audit: BPF prog-id=84 op=LOAD Jun 25 14:54:06.947279 kernel: audit: type=1334 audit(1719327246.940:295): prog-id=84 op=LOAD Jun 25 14:54:06.941000 audit: BPF prog-id=85 op=LOAD Jun 25 14:54:06.952551 kernel: audit: type=1334 audit(1719327246.941:296): prog-id=85 op=LOAD Jun 25 14:54:06.941000 audit[2572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=2545 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:06.973960 kernel: audit: type=1300 audit(1719327246.941:296): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=2545 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:06.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133626237656130343138336664616539353432396230383361633232 Jun 25 14:54:06.995320 kernel: audit: type=1327 audit(1719327246.941:296): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133626237656130343138336664616539353432396230383361633232 Jun 25 14:54:06.997914 kernel: audit: type=1334 audit(1719327246.941:297): prog-id=86 op=LOAD Jun 25 14:54:06.941000 audit: BPF prog-id=86 op=LOAD Jun 25 14:54:07.023977 kernel: audit: type=1300 audit(1719327246.941:297): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=2545 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:06.941000 audit[2572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=2545 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:06.941000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133626237656130343138336664616539353432396230383361633232 Jun 25 14:54:07.049368 kernel: audit: type=1327 audit(1719327246.941:297): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133626237656130343138336664616539353432396230383361633232 Jun 25 14:54:06.946000 audit: BPF prog-id=86 op=UNLOAD Jun 25 14:54:07.052175 systemd[1]: Started cri-containerd-1767c13e3bf110fa4736d783433cd5d7a9f4746ff893d71fd070e8e2b673488a.scope - libcontainer container 1767c13e3bf110fa4736d783433cd5d7a9f4746ff893d71fd070e8e2b673488a. Jun 25 14:54:07.057356 kernel: audit: type=1334 audit(1719327246.946:298): prog-id=86 op=UNLOAD Jun 25 14:54:07.058952 kubelet[2483]: W0625 14:54:07.058910 2483 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:07.058952 kubelet[2483]: E0625 14:54:07.058949 2483 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:06.946000 audit: BPF prog-id=85 op=UNLOAD Jun 25 14:54:07.064603 kernel: audit: type=1334 audit(1719327246.946:299): prog-id=85 op=UNLOAD Jun 25 14:54:07.070201 containerd[1492]: time="2024-06-25T14:54:07.070161553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.4-a-f605b45a38,Uid:974a83ab3ac7fb6bccd14f63f6b3ce5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"13bb7ea04183fdae95429b083ac22de06d7774654f075bbf5e93942273520350\"" Jun 25 14:54:06.941000 audit: BPF prog-id=87 op=LOAD Jun 25 14:54:06.941000 audit[2567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=2544 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:06.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265383739376534643265353335653233363761386635353663663066 Jun 25 14:54:06.946000 audit: BPF prog-id=88 op=LOAD Jun 25 14:54:06.946000 audit[2572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=2545 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:06.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133626237656130343138336664616539353432396230383361633232 Jun 25 14:54:06.951000 audit: BPF prog-id=89 
op=LOAD Jun 25 14:54:06.951000 audit[2567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=2544 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:06.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265383739376534643265353335653233363761386635353663663066 Jun 25 14:54:06.952000 audit: BPF prog-id=89 op=UNLOAD Jun 25 14:54:06.952000 audit: BPF prog-id=87 op=UNLOAD Jun 25 14:54:06.952000 audit: BPF prog-id=90 op=LOAD Jun 25 14:54:06.952000 audit[2567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a9b10 a2=78 a3=0 items=0 ppid=2544 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:06.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265383739376534643265353335653233363761386635353663663066 Jun 25 14:54:07.080906 containerd[1492]: time="2024-06-25T14:54:07.079321671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.4-a-f605b45a38,Uid:f86b874b003ea5b31ff48bbf2cb73c0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e8797e4d2e535e2367a8f556cf0faab3f4fcc848da18945093f2fc3b70e55e0\"" Jun 25 14:54:07.081513 containerd[1492]: time="2024-06-25T14:54:07.081481614Z" level=info msg="CreateContainer within sandbox \"13bb7ea04183fdae95429b083ac22de06d7774654f075bbf5e93942273520350\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 14:54:07.083000 audit: BPF prog-id=91 op=LOAD Jun 25 14:54:07.084000 audit: BPF prog-id=92 op=LOAD Jun 25 14:54:07.084000 audit[2588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=2546 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:07.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137363763313365336266313130666134373336643738333433336364 Jun 25 14:54:07.084000 audit: BPF prog-id=93 op=LOAD Jun 25 14:54:07.084000 audit[2588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=2546 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:07.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137363763313365336266313130666134373336643738333433336364 Jun 25 14:54:07.084000 audit: BPF prog-id=93 op=UNLOAD Jun 25 14:54:07.084000 audit: BPF 
prog-id=92 op=UNLOAD Jun 25 14:54:07.084000 audit: BPF prog-id=94 op=LOAD Jun 25 14:54:07.084000 audit[2588]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=2546 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:07.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137363763313365336266313130666134373336643738333433336364 Jun 25 14:54:07.087085 containerd[1492]: time="2024-06-25T14:54:07.087030879Z" level=info msg="CreateContainer within sandbox \"2e8797e4d2e535e2367a8f556cf0faab3f4fcc848da18945093f2fc3b70e55e0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 14:54:07.105687 containerd[1492]: time="2024-06-25T14:54:07.105630769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.4-a-f605b45a38,Uid:5d6dfbc432e8332f9f9569b1d8ef77a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"1767c13e3bf110fa4736d783433cd5d7a9f4746ff893d71fd070e8e2b673488a\"" Jun 25 14:54:07.109650 containerd[1492]: time="2024-06-25T14:54:07.109606479Z" level=info msg="CreateContainer within sandbox \"1767c13e3bf110fa4736d783433cd5d7a9f4746ff893d71fd070e8e2b673488a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 14:54:07.150761 containerd[1492]: time="2024-06-25T14:54:07.150704644Z" level=info msg="CreateContainer within sandbox \"2e8797e4d2e535e2367a8f556cf0faab3f4fcc848da18945093f2fc3b70e55e0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"179f03e2b2c4d4cd592efabdc3aaa139a16fc1c057273043f9592c0432f05da1\"" Jun 25 14:54:07.151712 containerd[1492]: time="2024-06-25T14:54:07.151682251Z" level=info msg="StartContainer for \"179f03e2b2c4d4cd592efabdc3aaa139a16fc1c057273043f9592c0432f05da1\"" Jun 25 14:54:07.163273 containerd[1492]: time="2024-06-25T14:54:07.162743260Z" level=info msg="CreateContainer within sandbox \"13bb7ea04183fdae95429b083ac22de06d7774654f075bbf5e93942273520350\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4b0f5379c4e65bd7d0ac661ef0577516658aba5436deaa765a0d69ac04b20674\"" Jun 25 14:54:07.164237 containerd[1492]: time="2024-06-25T14:54:07.164196890Z" level=info msg="StartContainer for \"4b0f5379c4e65bd7d0ac661ef0577516658aba5436deaa765a0d69ac04b20674\"" Jun 25 14:54:07.187275 systemd[1]: Started cri-containerd-179f03e2b2c4d4cd592efabdc3aaa139a16fc1c057273043f9592c0432f05da1.scope - libcontainer container 179f03e2b2c4d4cd592efabdc3aaa139a16fc1c057273043f9592c0432f05da1. 
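The audit PROCTITLE fields interleaved above are the hex-encoded, NUL-separated argv of the audited process; the field is length-limited, which is why the container IDs in the decoded runc command lines appear cut short. Decoding the first one, for example, yields "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/13bb7ea04183fdae95429b083ac22". A small sketch that decodes such a blob when passed as a command-line argument:

// proctitle.go — decodes audit PROCTITLE hex blobs into the original command line.
package main

import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	for _, arg := range os.Args[1:] {
		raw, err := hex.DecodeString(arg)
		if err != nil {
			fmt.Fprintln(os.Stderr, "not a hex proctitle:", err)
			continue
		}
		// argv elements are separated by NUL bytes in the audit record.
		fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
	}
}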
Jun 25 14:54:07.195000 audit: BPF prog-id=95 op=LOAD Jun 25 14:54:07.196000 audit: BPF prog-id=96 op=LOAD Jun 25 14:54:07.196000 audit[2657]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=2544 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:07.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137396630336532623263346434636435393265666162646333616161 Jun 25 14:54:07.196000 audit: BPF prog-id=97 op=LOAD Jun 25 14:54:07.196000 audit[2657]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=2544 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:07.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137396630336532623263346434636435393265666162646333616161 Jun 25 14:54:07.196000 audit: BPF prog-id=97 op=UNLOAD Jun 25 14:54:07.196000 audit: BPF prog-id=96 op=UNLOAD Jun 25 14:54:07.196000 audit: BPF prog-id=98 op=LOAD Jun 25 14:54:07.196000 audit[2657]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=2544 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:07.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137396630336532623263346434636435393265666162646333616161 Jun 25 14:54:07.208075 containerd[1492]: time="2024-06-25T14:54:07.208014025Z" level=info msg="CreateContainer within sandbox \"1767c13e3bf110fa4736d783433cd5d7a9f4746ff893d71fd070e8e2b673488a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4973d26c5ab5bdc2a5430559466a5bd0e7f1b63ea499f214119a1fe7672bf607\"" Jun 25 14:54:07.211271 containerd[1492]: time="2024-06-25T14:54:07.209418132Z" level=info msg="StartContainer for \"4973d26c5ab5bdc2a5430559466a5bd0e7f1b63ea499f214119a1fe7672bf607\"" Jun 25 14:54:07.210958 systemd[1]: Started cri-containerd-4b0f5379c4e65bd7d0ac661ef0577516658aba5436deaa765a0d69ac04b20674.scope - libcontainer container 4b0f5379c4e65bd7d0ac661ef0577516658aba5436deaa765a0d69ac04b20674. 
Jun 25 14:54:07.224000 audit: BPF prog-id=99 op=LOAD Jun 25 14:54:07.225000 audit: BPF prog-id=100 op=LOAD Jun 25 14:54:07.225000 audit[2683]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001338b0 a2=78 a3=0 items=0 ppid=2545 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:07.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462306635333739633465363562643764306163363631656630353737 Jun 25 14:54:07.225000 audit: BPF prog-id=101 op=LOAD Jun 25 14:54:07.225000 audit[2683]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000133640 a2=78 a3=0 items=0 ppid=2545 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:07.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462306635333739633465363562643764306163363631656630353737 Jun 25 14:54:07.225000 audit: BPF prog-id=101 op=UNLOAD Jun 25 14:54:07.225000 audit: BPF prog-id=100 op=UNLOAD Jun 25 14:54:07.225000 audit: BPF prog-id=102 op=LOAD Jun 25 14:54:07.225000 audit[2683]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000133b10 a2=78 a3=0 items=0 ppid=2545 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:07.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462306635333739633465363562643764306163363631656630353737 Jun 25 14:54:07.234065 containerd[1492]: time="2024-06-25T14:54:07.234022989Z" level=info msg="StartContainer for \"179f03e2b2c4d4cd592efabdc3aaa139a16fc1c057273043f9592c0432f05da1\" returns successfully" Jun 25 14:54:07.253968 systemd[1]: Started cri-containerd-4973d26c5ab5bdc2a5430559466a5bd0e7f1b63ea499f214119a1fe7672bf607.scope - libcontainer container 4973d26c5ab5bdc2a5430559466a5bd0e7f1b63ea499f214119a1fe7672bf607. 
Jun 25 14:54:07.269085 containerd[1492]: time="2024-06-25T14:54:07.269022823Z" level=info msg="StartContainer for \"4b0f5379c4e65bd7d0ac661ef0577516658aba5436deaa765a0d69ac04b20674\" returns successfully" Jun 25 14:54:07.273000 audit: BPF prog-id=103 op=LOAD Jun 25 14:54:07.274000 audit: BPF prog-id=104 op=LOAD Jun 25 14:54:07.274000 audit[2717]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=2546 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:07.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373364323663356162356264633261353433303535393436366135 Jun 25 14:54:07.274000 audit: BPF prog-id=105 op=LOAD Jun 25 14:54:07.274000 audit[2717]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=2546 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:07.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373364323663356162356264633261353433303535393436366135 Jun 25 14:54:07.274000 audit: BPF prog-id=105 op=UNLOAD Jun 25 14:54:07.274000 audit: BPF prog-id=104 op=UNLOAD Jun 25 14:54:07.274000 audit: BPF prog-id=106 op=LOAD Jun 25 14:54:07.274000 audit[2717]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=2546 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:07.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439373364323663356162356264633261353433303535393436366135 Jun 25 14:54:07.310213 containerd[1492]: time="2024-06-25T14:54:07.310086466Z" level=info msg="StartContainer for \"4973d26c5ab5bdc2a5430559466a5bd0e7f1b63ea499f214119a1fe7672bf607\" returns successfully" Jun 25 14:54:07.418577 kubelet[2483]: W0625 14:54:07.418496 2483 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:07.418577 kubelet[2483]: E0625 14:54:07.418550 2483 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jun 25 14:54:08.148375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657593754.mount: Deactivated successfully. 
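The audit SYSCALL records in this section can be read against the arm64 syscall table: arch=c00000b7 is AUDIT_ARCH_AARCH64, syscall=280 (the BPF prog-id LOAD/UNLOAD records emitted around each runc invocation, presumably its per-container cgroup programs) is bpf, and syscall=27 (the AVC "denied { watch }" records that follow) is inotify_add_watch, i.e. the control-plane binaries trying to watch their certificate files. A tiny lookup sketch covering just the numbers that occur in this log; the table values are an assumption taken from the asm-generic/arm64 unistd list, not from the log itself:

// syscalls.go — names the arm64 syscall numbers appearing in this journal's audit records.
package main

import "fmt"

var aarch64 = map[int]string{
	27:  "inotify_add_watch", // the AVC "denied { watch }" records below
	280: "bpf",               // the prog-id LOAD/UNLOAD records around runc above
}

func main() {
	for _, nr := range []int{27, 280} {
		fmt.Printf("arch=c00000b7 syscall=%d -> %s\n", nr, aarch64[nr])
	}
}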
Jun 25 14:54:09.981000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:09.981000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=40 a1=40076929c0 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:54:09.981000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:54:09.981000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:09.981000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=40 a1=40049a0160 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:54:09.981000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:54:09.982000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=3646736 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:09.982000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=40 a1=4007692b40 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:54:09.982000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:54:09.987000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=3646742 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:09.987000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=46 a1=4004ceef00 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:54:09.987000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:54:10.019000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:10.019000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=49 a1=400469fd60 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:54:10.019000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:54:10.020000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:10.020000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=49 a1=4003074420 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:54:10.020000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:54:10.111680 kubelet[2483]: I0625 14:54:10.111653 2483 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-f605b45a38" Jun 25 14:54:10.282669 kubelet[2483]: I0625 14:54:10.282625 2483 kubelet_node_status.go:73] "Successfully registered node" node="ci-3815.2.4-a-f605b45a38" Jun 25 14:54:10.484525 kubelet[2483]: E0625 14:54:10.484485 2483 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="6.4s" Jun 25 14:54:10.730000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:10.730000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=8 a1=400078f140 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:54:10.730000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:10.730000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:10.730000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=8 a1=40008848a0 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:54:10.730000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:10.791927 kubelet[2483]: I0625 14:54:10.791901 2483 apiserver.go:52] "Watching apiserver" Jun 25 14:54:10.798970 kubelet[2483]: I0625 14:54:10.798923 2483 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 14:54:13.003771 systemd[1]: Reloading. Jun 25 14:54:13.170485 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:54:13.256605 kernel: kauditd_printk_skb: 86 callbacks suppressed Jun 25 14:54:13.256708 kernel: audit: type=1334 audit(1719327253.246:338): prog-id=107 op=LOAD Jun 25 14:54:13.246000 audit: BPF prog-id=107 op=LOAD Jun 25 14:54:13.266313 kernel: audit: type=1334 audit(1719327253.246:339): prog-id=69 op=UNLOAD Jun 25 14:54:13.246000 audit: BPF prog-id=69 op=UNLOAD Jun 25 14:54:13.250000 audit: BPF prog-id=108 op=LOAD Jun 25 14:54:13.272632 kernel: audit: type=1334 audit(1719327253.250:340): prog-id=108 op=LOAD Jun 25 14:54:13.250000 audit: BPF prog-id=109 op=LOAD Jun 25 14:54:13.250000 audit: BPF prog-id=70 op=UNLOAD Jun 25 14:54:13.283318 kernel: audit: type=1334 audit(1719327253.250:341): prog-id=109 op=LOAD Jun 25 14:54:13.283370 kernel: audit: type=1334 audit(1719327253.250:342): prog-id=70 op=UNLOAD Jun 25 14:54:13.250000 audit: BPF prog-id=71 op=UNLOAD Jun 25 14:54:13.288819 kernel: audit: type=1334 audit(1719327253.250:343): prog-id=71 op=UNLOAD Jun 25 14:54:13.255000 audit: BPF prog-id=110 op=LOAD Jun 25 14:54:13.294224 kernel: audit: type=1334 audit(1719327253.255:344): prog-id=110 op=LOAD Jun 25 14:54:13.255000 audit: BPF prog-id=111 op=LOAD Jun 25 14:54:13.305824 kernel: audit: type=1334 audit(1719327253.255:345): prog-id=111 op=LOAD Jun 25 14:54:13.255000 audit: BPF prog-id=72 op=UNLOAD Jun 25 14:54:13.307277 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
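The AVC records above show kube-apiserver and kube-controller-manager (running as container_t) being denied inotify watches on the etc_t-labelled certificate files under /etc/kubernetes/pki; only the { watch } permission is refused, not reading the files, so the components continue to start. A Linux-only sketch, assuming it is run as root on the node itself, that reads the SELinux label the denials refer to via the security.selinux xattr:

// label.go — prints a file's SELinux label; paths are the ones named in the AVC records above.
package main

import (
	"fmt"
	"strings"
	"syscall"
)

func main() {
	paths := []string{
		"/etc/kubernetes/pki/ca.crt",
		"/etc/kubernetes/pki/front-proxy-ca.crt",
	}
	buf := make([]byte, 256)
	for _, p := range paths {
		n, err := syscall.Getxattr(p, "security.selinux", buf)
		if err != nil {
			fmt.Println(p, "->", err)
			continue
		}
		// The label comes back NUL-terminated, e.g. "system_u:object_r:etc_t:s0".
		fmt.Printf("%s -> %s\n", p, strings.TrimRight(string(buf[:n]), "\x00"))
	}
}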
Jun 25 14:54:13.310741 kernel: audit: type=1334 audit(1719327253.255:346): prog-id=72 op=UNLOAD Jun 25 14:54:13.311206 kubelet[2483]: I0625 14:54:13.311174 2483 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:54:13.255000 audit: BPF prog-id=73 op=UNLOAD Jun 25 14:54:13.316547 kernel: audit: type=1334 audit(1719327253.255:347): prog-id=73 op=UNLOAD Jun 25 14:54:13.256000 audit: BPF prog-id=112 op=LOAD Jun 25 14:54:13.256000 audit: BPF prog-id=103 op=UNLOAD Jun 25 14:54:13.259000 audit: BPF prog-id=113 op=LOAD Jun 25 14:54:13.259000 audit: BPF prog-id=99 op=UNLOAD Jun 25 14:54:13.264000 audit: BPF prog-id=114 op=LOAD Jun 25 14:54:13.264000 audit: BPF prog-id=84 op=UNLOAD Jun 25 14:54:13.265000 audit: BPF prog-id=115 op=LOAD Jun 25 14:54:13.265000 audit: BPF prog-id=74 op=UNLOAD Jun 25 14:54:13.265000 audit: BPF prog-id=116 op=LOAD Jun 25 14:54:13.265000 audit: BPF prog-id=117 op=LOAD Jun 25 14:54:13.265000 audit: BPF prog-id=75 op=UNLOAD Jun 25 14:54:13.265000 audit: BPF prog-id=76 op=UNLOAD Jun 25 14:54:13.266000 audit: BPF prog-id=118 op=LOAD Jun 25 14:54:13.266000 audit: BPF prog-id=83 op=UNLOAD Jun 25 14:54:13.271000 audit: BPF prog-id=119 op=LOAD Jun 25 14:54:13.271000 audit: BPF prog-id=77 op=UNLOAD Jun 25 14:54:13.271000 audit: BPF prog-id=120 op=LOAD Jun 25 14:54:13.271000 audit: BPF prog-id=78 op=UNLOAD Jun 25 14:54:13.271000 audit: BPF prog-id=121 op=LOAD Jun 25 14:54:13.271000 audit: BPF prog-id=122 op=LOAD Jun 25 14:54:13.271000 audit: BPF prog-id=79 op=UNLOAD Jun 25 14:54:13.271000 audit: BPF prog-id=80 op=UNLOAD Jun 25 14:54:13.276000 audit: BPF prog-id=123 op=LOAD Jun 25 14:54:13.276000 audit: BPF prog-id=81 op=UNLOAD Jun 25 14:54:13.277000 audit: BPF prog-id=124 op=LOAD Jun 25 14:54:13.277000 audit: BPF prog-id=91 op=UNLOAD Jun 25 14:54:13.282000 audit: BPF prog-id=125 op=LOAD Jun 25 14:54:13.282000 audit: BPF prog-id=82 op=UNLOAD Jun 25 14:54:13.287000 audit: BPF prog-id=126 op=LOAD Jun 25 14:54:13.287000 audit: BPF prog-id=95 op=UNLOAD Jun 25 14:54:13.334182 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 14:54:13.334410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:54:13.334469 systemd[1]: kubelet.service: Consumed 1.643s CPU time. Jun 25 14:54:13.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:13.340204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:54:13.425757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:54:13.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:13.476488 kubelet[2843]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:54:13.476488 kubelet[2843]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jun 25 14:54:13.476488 kubelet[2843]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:54:13.476894 kubelet[2843]: I0625 14:54:13.476533 2843 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:54:13.480459 kubelet[2843]: I0625 14:54:13.480436 2843 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 14:54:13.480593 kubelet[2843]: I0625 14:54:13.480583 2843 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:54:13.480871 kubelet[2843]: I0625 14:54:13.480857 2843 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 14:54:13.482436 kubelet[2843]: I0625 14:54:13.482419 2843 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 14:54:13.483537 kubelet[2843]: I0625 14:54:13.483522 2843 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:54:13.490720 kubelet[2843]: W0625 14:54:13.490701 2843 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 14:54:13.491465 kubelet[2843]: I0625 14:54:13.491438 2843 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 14:54:13.491772 kubelet[2843]: I0625 14:54:13.491758 2843 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:54:13.492047 kubelet[2843]: I0625 14:54:13.492025 2843 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:54:13.492186 kubelet[2843]: I0625 14:54:13.492174 2843 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:54:13.492246 kubelet[2843]: I0625 14:54:13.492236 2843 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:54:13.492336 kubelet[2843]: I0625 14:54:13.492327 2843 state_mem.go:36] "Initialized new in-memory 
state store" Jun 25 14:54:13.492482 kubelet[2843]: I0625 14:54:13.492472 2843 kubelet.go:393] "Attempting to sync node with API server" Jun 25 14:54:13.493130 kubelet[2843]: I0625 14:54:13.493078 2843 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 14:54:13.493269 kubelet[2843]: I0625 14:54:13.493256 2843 kubelet.go:309] "Adding apiserver pod source" Jun 25 14:54:13.493353 kubelet[2843]: I0625 14:54:13.493343 2843 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:54:13.498077 kubelet[2843]: I0625 14:54:13.497944 2843 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:54:13.498454 kubelet[2843]: I0625 14:54:13.498426 2843 server.go:1232] "Started kubelet" Jun 25 14:54:13.505183 kubelet[2843]: I0625 14:54:13.505157 2843 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:54:13.508731 kubelet[2843]: I0625 14:54:13.508652 2843 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:54:13.510978 kubelet[2843]: I0625 14:54:13.510959 2843 server.go:462] "Adding debug handlers to kubelet server" Jun 25 14:54:13.511925 kubelet[2843]: I0625 14:54:13.511905 2843 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 14:54:13.512077 kubelet[2843]: I0625 14:54:13.512059 2843 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 14:54:13.519037 kubelet[2843]: I0625 14:54:13.519013 2843 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 14:54:13.519458 kubelet[2843]: I0625 14:54:13.519438 2843 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 14:54:13.520131 kubelet[2843]: I0625 14:54:13.520115 2843 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 14:54:13.521972 kubelet[2843]: E0625 14:54:13.521939 2843 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 14:54:13.522151 kubelet[2843]: E0625 14:54:13.522139 2843 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:54:13.532925 kubelet[2843]: I0625 14:54:13.532892 2843 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 14:54:13.533735 kubelet[2843]: I0625 14:54:13.533699 2843 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 14:54:13.533735 kubelet[2843]: I0625 14:54:13.533723 2843 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:54:13.533735 kubelet[2843]: I0625 14:54:13.533742 2843 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 14:54:13.533889 kubelet[2843]: E0625 14:54:13.533805 2843 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 14:54:13.583869 kubelet[2843]: I0625 14:54:13.583758 2843 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:54:13.583869 kubelet[2843]: I0625 14:54:13.583816 2843 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:54:13.583869 kubelet[2843]: I0625 14:54:13.583835 2843 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:54:13.584054 kubelet[2843]: I0625 14:54:13.584020 2843 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 14:54:13.584054 kubelet[2843]: I0625 14:54:13.584042 2843 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 14:54:13.584054 kubelet[2843]: I0625 14:54:13.584049 2843 policy_none.go:49] "None policy: Start" Jun 25 14:54:13.584719 kubelet[2843]: I0625 14:54:13.584696 2843 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 14:54:13.584774 kubelet[2843]: I0625 14:54:13.584726 2843 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:54:13.584970 kubelet[2843]: I0625 14:54:13.584943 2843 state_mem.go:75] "Updated machine memory state" Jun 25 14:54:13.588771 kubelet[2843]: I0625 14:54:13.588755 2843 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 14:54:13.589130 kubelet[2843]: I0625 14:54:13.589112 2843 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:54:13.622300 kubelet[2843]: I0625 14:54:13.622272 2843 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-f605b45a38" Jun 25 14:54:13.631112 kubelet[2843]: I0625 14:54:13.631009 2843 kubelet_node_status.go:108] "Node was previously registered" node="ci-3815.2.4-a-f605b45a38" Jun 25 14:54:13.631730 kubelet[2843]: I0625 14:54:13.631633 2843 kubelet_node_status.go:73] "Successfully registered node" node="ci-3815.2.4-a-f605b45a38" Jun 25 14:54:13.634172 kubelet[2843]: I0625 14:54:13.634126 2843 topology_manager.go:215] "Topology Admit Handler" podUID="5d6dfbc432e8332f9f9569b1d8ef77a3" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:13.634296 kubelet[2843]: I0625 14:54:13.634229 2843 topology_manager.go:215] "Topology Admit Handler" podUID="974a83ab3ac7fb6bccd14f63f6b3ce5c" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:13.634296 kubelet[2843]: I0625 14:54:13.634266 2843 topology_manager.go:215] "Topology Admit Handler" podUID="f86b874b003ea5b31ff48bbf2cb73c0a" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:13.642991 kubelet[2843]: W0625 14:54:13.642938 2843 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 14:54:13.647297 kubelet[2843]: W0625 14:54:13.647260 2843 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 14:54:13.924209 kubelet[2843]: 
I0625 14:54:13.924176 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/974a83ab3ac7fb6bccd14f63f6b3ce5c-ca-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-f605b45a38\" (UID: \"974a83ab3ac7fb6bccd14f63f6b3ce5c\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:13.924403 kubelet[2843]: I0625 14:54:13.924391 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/974a83ab3ac7fb6bccd14f63f6b3ce5c-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.4-a-f605b45a38\" (UID: \"974a83ab3ac7fb6bccd14f63f6b3ce5c\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:13.924478 kubelet[2843]: I0625 14:54:13.924468 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f86b874b003ea5b31ff48bbf2cb73c0a-kubeconfig\") pod \"kube-scheduler-ci-3815.2.4-a-f605b45a38\" (UID: \"f86b874b003ea5b31ff48bbf2cb73c0a\") " pod="kube-system/kube-scheduler-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:13.924553 kubelet[2843]: I0625 14:54:13.924544 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d6dfbc432e8332f9f9569b1d8ef77a3-k8s-certs\") pod \"kube-apiserver-ci-3815.2.4-a-f605b45a38\" (UID: \"5d6dfbc432e8332f9f9569b1d8ef77a3\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:13.924624 kubelet[2843]: I0625 14:54:13.924616 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d6dfbc432e8332f9f9569b1d8ef77a3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.4-a-f605b45a38\" (UID: \"5d6dfbc432e8332f9f9569b1d8ef77a3\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:13.924698 kubelet[2843]: I0625 14:54:13.924690 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/974a83ab3ac7fb6bccd14f63f6b3ce5c-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-f605b45a38\" (UID: \"974a83ab3ac7fb6bccd14f63f6b3ce5c\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:13.924777 kubelet[2843]: I0625 14:54:13.924769 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/974a83ab3ac7fb6bccd14f63f6b3ce5c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.4-a-f605b45a38\" (UID: \"974a83ab3ac7fb6bccd14f63f6b3ce5c\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:13.924881 kubelet[2843]: I0625 14:54:13.924871 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d6dfbc432e8332f9f9569b1d8ef77a3-ca-certs\") pod \"kube-apiserver-ci-3815.2.4-a-f605b45a38\" (UID: \"5d6dfbc432e8332f9f9569b1d8ef77a3\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:13.924958 kubelet[2843]: I0625 14:54:13.924949 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/974a83ab3ac7fb6bccd14f63f6b3ce5c-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.4-a-f605b45a38\" (UID: \"974a83ab3ac7fb6bccd14f63f6b3ce5c\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-f605b45a38" Jun 25 14:54:13.926815 kubelet[2843]: W0625 14:54:13.926772 2843 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 14:54:14.498141 kubelet[2843]: I0625 14:54:14.498089 2843 apiserver.go:52] "Watching apiserver" Jun 25 14:54:14.520704 kubelet[2843]: I0625 14:54:14.520651 2843 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 14:54:14.597045 kubelet[2843]: I0625 14:54:14.597007 2843 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3815.2.4-a-f605b45a38" podStartSLOduration=1.596947219 podCreationTimestamp="2024-06-25 14:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:54:14.595244551 +0000 UTC m=+1.163401896" watchObservedRunningTime="2024-06-25 14:54:14.596947219 +0000 UTC m=+1.165104564" Jun 25 14:54:14.607768 kubelet[2843]: I0625 14:54:14.607737 2843 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3815.2.4-a-f605b45a38" podStartSLOduration=1.6076998040000001 podCreationTimestamp="2024-06-25 14:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:54:14.606997616 +0000 UTC m=+1.175154961" watchObservedRunningTime="2024-06-25 14:54:14.607699804 +0000 UTC m=+1.175857189" Jun 25 14:54:14.634160 kubelet[2843]: I0625 14:54:14.634122 2843 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3815.2.4-a-f605b45a38" podStartSLOduration=1.634069608 podCreationTimestamp="2024-06-25 14:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:54:14.622945968 +0000 UTC m=+1.191103313" watchObservedRunningTime="2024-06-25 14:54:14.634069608 +0000 UTC m=+1.202226953" Jun 25 14:54:15.446000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="sda9" ino=6772516 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 14:54:15.446000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=9 a1=4000a95800 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:54:15.446000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:17.415811 sudo[1937]: pam_unix(sudo:session): session closed for user root Jun 25 14:54:17.417000 audit[1937]: USER_END pid=1937 uid=500 auid=500 ses=9 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:54:17.417000 audit[1937]: CRED_DISP pid=1937 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:54:17.501139 sshd[1934]: pam_unix(sshd:session): session closed for user core Jun 25 14:54:17.501000 audit[1934]: USER_END pid=1934 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:17.501000 audit[1934]: CRED_DISP pid=1934 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:17.503751 systemd[1]: sshd@6-10.200.20.34:22-10.200.16.10:43414.service: Deactivated successfully. Jun 25 14:54:17.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.34:22-10.200.16.10:43414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:17.504553 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 14:54:17.504740 systemd[1]: session-9.scope: Consumed 6.267s CPU time. Jun 25 14:54:17.505198 systemd-logind[1476]: Session 9 logged out. Waiting for processes to exit. Jun 25 14:54:17.506038 systemd-logind[1476]: Removed session 9. 
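The PROCTITLE fields in the audit records above carry the audited process's command line, hex-encoded with NUL bytes separating the arguments; long values are truncated in the record, which is why the kube-controller-manager entries end in "--authori". A minimal Python sketch (not part of the captured log) that decodes a shortened excerpt of the value logged above:

    # Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes,
    # so splitting on b"\x00" recovers the individual arguments.
    excerpt = ("6B7562652D636F6E74726F6C6C65722D6D616E61676572" "00"
               "2D2D616C6C6F636174652D6E6F64652D63696472733D74727565")
    args = bytes.fromhex(excerpt).split(b"\x00")
    print(" ".join(arg.decode() for arg in args))
    # -> kube-controller-manager --allocate-node-cidrs=true

Applied to the full value, the decode reads kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authori, cut off by the record's size limit.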
Jun 25 14:54:26.024955 kernel: kauditd_printk_skb: 40 callbacks suppressed Jun 25 14:54:26.025099 kernel: audit: type=1400 audit(1719327266.019:386): avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:26.019000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:26.019000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40011fb320 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:54:26.067284 kernel: audit: type=1300 audit(1719327266.019:386): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40011fb320 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:54:26.019000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:26.089466 kernel: audit: type=1327 audit(1719327266.019:386): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:26.021000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:26.108717 kernel: audit: type=1400 audit(1719327266.021:387): avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:26.021000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=40011fb4e0 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:54:26.132295 kernel: audit: type=1300 audit(1719327266.021:387): arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=40011fb4e0 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:54:26.021000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:26.153494 kernel: audit: type=1327 audit(1719327266.021:387): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:26.022000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:26.172254 kernel: audit: type=1400 audit(1719327266.022:388): avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:26.022000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=40011fb6a0 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:54:26.196831 kernel: audit: type=1300 audit(1719327266.022:388): arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=40011fb6a0 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:54:26.022000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:26.218445 kernel: audit: type=1327 audit(1719327266.022:388): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:26.023000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:26.236744 kernel: audit: type=1400 audit(1719327266.023:389): avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:26.023000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001303d60 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) 
Jun 25 14:54:26.023000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:26.640522 kubelet[2843]: I0625 14:54:26.640490 2843 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 14:54:26.641192 containerd[1492]: time="2024-06-25T14:54:26.641148975Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 14:54:26.641684 kubelet[2843]: I0625 14:54:26.641657 2843 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 14:54:27.076610 kubelet[2843]: I0625 14:54:27.076569 2843 topology_manager.go:215] "Topology Admit Handler" podUID="20f44b89-9971-490a-b771-8daf1f8085fe" podNamespace="kube-system" podName="kube-proxy-7p8lz" Jun 25 14:54:27.081579 systemd[1]: Created slice kubepods-besteffort-pod20f44b89_9971_490a_b771_8daf1f8085fe.slice - libcontainer container kubepods-besteffort-pod20f44b89_9971_490a_b771_8daf1f8085fe.slice. Jun 25 14:54:27.093566 kubelet[2843]: I0625 14:54:27.093533 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpgns\" (UniqueName: \"kubernetes.io/projected/20f44b89-9971-490a-b771-8daf1f8085fe-kube-api-access-fpgns\") pod \"kube-proxy-7p8lz\" (UID: \"20f44b89-9971-490a-b771-8daf1f8085fe\") " pod="kube-system/kube-proxy-7p8lz" Jun 25 14:54:27.093672 kubelet[2843]: I0625 14:54:27.093585 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20f44b89-9971-490a-b771-8daf1f8085fe-xtables-lock\") pod \"kube-proxy-7p8lz\" (UID: \"20f44b89-9971-490a-b771-8daf1f8085fe\") " pod="kube-system/kube-proxy-7p8lz" Jun 25 14:54:27.093672 kubelet[2843]: I0625 14:54:27.093610 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20f44b89-9971-490a-b771-8daf1f8085fe-lib-modules\") pod \"kube-proxy-7p8lz\" (UID: \"20f44b89-9971-490a-b771-8daf1f8085fe\") " pod="kube-system/kube-proxy-7p8lz" Jun 25 14:54:27.093672 kubelet[2843]: I0625 14:54:27.093629 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/20f44b89-9971-490a-b771-8daf1f8085fe-kube-proxy\") pod \"kube-proxy-7p8lz\" (UID: \"20f44b89-9971-490a-b771-8daf1f8085fe\") " pod="kube-system/kube-proxy-7p8lz" Jun 25 14:54:27.202164 kubelet[2843]: E0625 14:54:27.202133 2843 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 25 14:54:27.202342 kubelet[2843]: E0625 14:54:27.202329 2843 projected.go:198] Error preparing data for projected volume kube-api-access-fpgns for pod kube-system/kube-proxy-7p8lz: configmap "kube-root-ca.crt" not found Jun 25 14:54:27.202476 kubelet[2843]: E0625 14:54:27.202462 2843 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/20f44b89-9971-490a-b771-8daf1f8085fe-kube-api-access-fpgns podName:20f44b89-9971-490a-b771-8daf1f8085fe nodeName:}" failed. No retries permitted until 2024-06-25 14:54:27.702441239 +0000 UTC m=+14.270598584 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fpgns" (UniqueName: "kubernetes.io/projected/20f44b89-9971-490a-b771-8daf1f8085fe-kube-api-access-fpgns") pod "kube-proxy-7p8lz" (UID: "20f44b89-9971-490a-b771-8daf1f8085fe") : configmap "kube-root-ca.crt" not found Jun 25 14:54:27.601070 kubelet[2843]: I0625 14:54:27.601020 2843 topology_manager.go:215] "Topology Admit Handler" podUID="310f9a83-d725-40b6-b0ea-e6d213a8e179" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-tqnts" Jun 25 14:54:27.606010 systemd[1]: Created slice kubepods-besteffort-pod310f9a83_d725_40b6_b0ea_e6d213a8e179.slice - libcontainer container kubepods-besteffort-pod310f9a83_d725_40b6_b0ea_e6d213a8e179.slice. Jun 25 14:54:27.697416 kubelet[2843]: I0625 14:54:27.697381 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/310f9a83-d725-40b6-b0ea-e6d213a8e179-var-lib-calico\") pod \"tigera-operator-76c4974c85-tqnts\" (UID: \"310f9a83-d725-40b6-b0ea-e6d213a8e179\") " pod="tigera-operator/tigera-operator-76c4974c85-tqnts" Jun 25 14:54:27.697731 kubelet[2843]: I0625 14:54:27.697437 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rkcf\" (UniqueName: \"kubernetes.io/projected/310f9a83-d725-40b6-b0ea-e6d213a8e179-kube-api-access-4rkcf\") pod \"tigera-operator-76c4974c85-tqnts\" (UID: \"310f9a83-d725-40b6-b0ea-e6d213a8e179\") " pod="tigera-operator/tigera-operator-76c4974c85-tqnts" Jun 25 14:54:27.910372 containerd[1492]: time="2024-06-25T14:54:27.909938278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-tqnts,Uid:310f9a83-d725-40b6-b0ea-e6d213a8e179,Namespace:tigera-operator,Attempt:0,}" Jun 25 14:54:27.972846 containerd[1492]: time="2024-06-25T14:54:27.972694019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:54:27.972981 containerd[1492]: time="2024-06-25T14:54:27.972860223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:27.972981 containerd[1492]: time="2024-06-25T14:54:27.972890864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:54:27.972981 containerd[1492]: time="2024-06-25T14:54:27.972935626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:27.990830 containerd[1492]: time="2024-06-25T14:54:27.990759611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7p8lz,Uid:20f44b89-9971-490a-b771-8daf1f8085fe,Namespace:kube-system,Attempt:0,}" Jun 25 14:54:27.992954 systemd[1]: Started cri-containerd-ebb97c2bc9cbe3e5ffb34415bb3efa5a2dc5e3096eba9d7950526c740e874b89.scope - libcontainer container ebb97c2bc9cbe3e5ffb34415bb3efa5a2dc5e3096eba9d7950526c740e874b89. 
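The MountVolume.SetUp failure above is the kubelet waiting on the kube-root-ca.crt ConfigMap, which the controller-manager's root CA publisher creates in each namespace once it is running; until it appears, the projected kube-api-access volume for kube-proxy-7p8lz cannot be built and the operation is retried with an increasing delay starting at the logged 500ms. An illustrative Python sketch of that kind of doubling schedule (the factor and the cap here are assumptions for the sketch, not values taken from the kubelet):

    # Illustrative only: a doubling backoff starting at the 500ms seen in the
    # log. factor=2 and the 2-minute cap are assumptions, not kubelet code.
    def backoff_delays(initial=0.5, factor=2.0, cap=120.0, attempts=8):
        delay = initial
        for _ in range(attempts):
            yield delay
            delay = min(delay * factor, cap)

    print([f"{d:g}s" for d in backoff_delays()])
    # -> ['0.5s', '1s', '2s', '4s', '8s', '16s', '32s', '64s']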
Jun 25 14:54:28.002000 audit: BPF prog-id=127 op=LOAD Jun 25 14:54:28.002000 audit: BPF prog-id=128 op=LOAD Jun 25 14:54:28.002000 audit[2942]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=2932 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562623937633262633963626533653566666233343431356262336566 Jun 25 14:54:28.003000 audit: BPF prog-id=129 op=LOAD Jun 25 14:54:28.003000 audit[2942]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=2932 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562623937633262633963626533653566666233343431356262336566 Jun 25 14:54:28.003000 audit: BPF prog-id=129 op=UNLOAD Jun 25 14:54:28.003000 audit: BPF prog-id=128 op=UNLOAD Jun 25 14:54:28.003000 audit: BPF prog-id=130 op=LOAD Jun 25 14:54:28.003000 audit[2942]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=2932 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562623937633262633963626533653566666233343431356262336566 Jun 25 14:54:28.023161 containerd[1492]: time="2024-06-25T14:54:28.023116594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-tqnts,Uid:310f9a83-d725-40b6-b0ea-e6d213a8e179,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ebb97c2bc9cbe3e5ffb34415bb3efa5a2dc5e3096eba9d7950526c740e874b89\"" Jun 25 14:54:28.025353 containerd[1492]: time="2024-06-25T14:54:28.025079249Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 14:54:28.035182 containerd[1492]: time="2024-06-25T14:54:28.035110647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:54:28.035340 containerd[1492]: time="2024-06-25T14:54:28.035304132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:28.035436 containerd[1492]: time="2024-06-25T14:54:28.035413815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:54:28.035516 containerd[1492]: time="2024-06-25T14:54:28.035495657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:28.048959 systemd[1]: Started cri-containerd-9b92d55b59192dab62122a05ec979eecbe04ef767f8cad7af81683f243ac0e4b.scope - libcontainer container 9b92d55b59192dab62122a05ec979eecbe04ef767f8cad7af81683f243ac0e4b. Jun 25 14:54:28.055000 audit: BPF prog-id=131 op=LOAD Jun 25 14:54:28.056000 audit: BPF prog-id=132 op=LOAD Jun 25 14:54:28.056000 audit[2983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2973 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962393264353562353931393264616236323132326130356563393739 Jun 25 14:54:28.056000 audit: BPF prog-id=133 op=LOAD Jun 25 14:54:28.056000 audit[2983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2973 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962393264353562353931393264616236323132326130356563393739 Jun 25 14:54:28.056000 audit: BPF prog-id=133 op=UNLOAD Jun 25 14:54:28.056000 audit: BPF prog-id=132 op=UNLOAD Jun 25 14:54:28.056000 audit: BPF prog-id=134 op=LOAD Jun 25 14:54:28.056000 audit[2983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2973 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962393264353562353931393264616236323132326130356563393739 Jun 25 14:54:28.068079 containerd[1492]: time="2024-06-25T14:54:28.068034638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7p8lz,Uid:20f44b89-9971-490a-b771-8daf1f8085fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b92d55b59192dab62122a05ec979eecbe04ef767f8cad7af81683f243ac0e4b\"" Jun 25 14:54:28.071768 containerd[1492]: time="2024-06-25T14:54:28.071737301Z" level=info msg="CreateContainer within sandbox \"9b92d55b59192dab62122a05ec979eecbe04ef767f8cad7af81683f243ac0e4b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 14:54:28.110855 containerd[1492]: time="2024-06-25T14:54:28.110725901Z" level=info msg="CreateContainer within sandbox \"9b92d55b59192dab62122a05ec979eecbe04ef767f8cad7af81683f243ac0e4b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"34e8ee690a274ec0e945a66b2deb00a0edbbf784974ba57c1cd39156506ac2fa\"" Jun 25 14:54:28.112064 containerd[1492]: time="2024-06-25T14:54:28.112036977Z" level=info msg="StartContainer for 
\"34e8ee690a274ec0e945a66b2deb00a0edbbf784974ba57c1cd39156506ac2fa\"" Jun 25 14:54:28.132947 systemd[1]: Started cri-containerd-34e8ee690a274ec0e945a66b2deb00a0edbbf784974ba57c1cd39156506ac2fa.scope - libcontainer container 34e8ee690a274ec0e945a66b2deb00a0edbbf784974ba57c1cd39156506ac2fa. Jun 25 14:54:28.142000 audit: BPF prog-id=135 op=LOAD Jun 25 14:54:28.142000 audit[3014]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=2973 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334653865653639306132373465633065393435613636623264656230 Jun 25 14:54:28.142000 audit: BPF prog-id=136 op=LOAD Jun 25 14:54:28.142000 audit[3014]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=2973 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334653865653639306132373465633065393435613636623264656230 Jun 25 14:54:28.142000 audit: BPF prog-id=136 op=UNLOAD Jun 25 14:54:28.143000 audit: BPF prog-id=135 op=UNLOAD Jun 25 14:54:28.143000 audit: BPF prog-id=137 op=LOAD Jun 25 14:54:28.143000 audit[3014]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=2973 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334653865653639306132373465633065393435613636623264656230 Jun 25 14:54:28.159066 containerd[1492]: time="2024-06-25T14:54:28.159020438Z" level=info msg="StartContainer for \"34e8ee690a274ec0e945a66b2deb00a0edbbf784974ba57c1cd39156506ac2fa\" returns successfully" Jun 25 14:54:28.216000 audit[3070]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.216000 audit[3070]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdfb779d0 a2=0 a3=1 items=0 ppid=3025 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.216000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:54:28.220000 audit[3071]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.220000 audit[3071]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdaf75280 a2=0 a3=1 items=0 ppid=3025 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.220000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:54:28.221000 audit[3069]: NETFILTER_CFG table=mangle:43 family=2 entries=1 op=nft_register_chain pid=3069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.221000 audit[3069]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff97050f0 a2=0 a3=1 items=0 ppid=3025 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.221000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:54:28.222000 audit[3072]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.222000 audit[3072]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee68e3e0 a2=0 a3=1 items=0 ppid=3025 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.222000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:54:28.222000 audit[3073]: NETFILTER_CFG table=nat:45 family=2 entries=1 op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.222000 audit[3073]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe23446b0 a2=0 a3=1 items=0 ppid=3025 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.222000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:54:28.223000 audit[3074]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.223000 audit[3074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd3da0680 a2=0 a3=1 items=0 ppid=3025 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.223000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:54:28.317000 audit[3075]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.317000 audit[3075]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc1454290 a2=0 a3=1 items=0 ppid=3025 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.317000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:54:28.319000 audit[3077]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.319000 audit[3077]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc1924680 a2=0 a3=1 items=0 ppid=3025 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.319000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 14:54:28.323000 audit[3080]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.323000 audit[3080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc341d200 a2=0 a3=1 items=0 ppid=3025 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.323000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 14:54:28.324000 audit[3081]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.324000 audit[3081]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2b1f590 a2=0 a3=1 items=0 ppid=3025 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.324000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:54:28.327000 audit[3083]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.327000 audit[3083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe0cde180 a2=0 a3=1 items=0 ppid=3025 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.327000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:54:28.328000 audit[3084]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.328000 
audit[3084]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe72de1d0 a2=0 a3=1 items=0 ppid=3025 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.328000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:54:28.330000 audit[3086]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.330000 audit[3086]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdf53f8b0 a2=0 a3=1 items=0 ppid=3025 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.330000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:54:28.334000 audit[3089]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.334000 audit[3089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc8c43020 a2=0 a3=1 items=0 ppid=3025 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.334000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 14:54:28.335000 audit[3090]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.335000 audit[3090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd12c2000 a2=0 a3=1 items=0 ppid=3025 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.335000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 14:54:28.338000 audit[3092]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.338000 audit[3092]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdc8bee30 a2=0 a3=1 items=0 ppid=3025 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.338000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 
14:54:28.339000 audit[3093]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.339000 audit[3093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe47b7ec0 a2=0 a3=1 items=0 ppid=3025 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.339000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:54:28.341000 audit[3095]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.341000 audit[3095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdbee4790 a2=0 a3=1 items=0 ppid=3025 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.341000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:54:28.345000 audit[3098]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.345000 audit[3098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe38faeb0 a2=0 a3=1 items=0 ppid=3025 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:54:28.348000 audit[3101]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.348000 audit[3101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeee61bf0 a2=0 a3=1 items=0 ppid=3025 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:54:28.350000 audit[3102]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3102 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.350000 audit[3102]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff3972340 a2=0 a3=1 items=0 ppid=3025 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.350000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:54:28.352000 audit[3104]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.352000 audit[3104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffdc0097f0 a2=0 a3=1 items=0 ppid=3025 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.352000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:54:28.356000 audit[3107]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.356000 audit[3107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd325eb80 a2=0 a3=1 items=0 ppid=3025 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.356000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:54:28.357000 audit[3108]: NETFILTER_CFG table=nat:64 family=2 entries=1 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.357000 audit[3108]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9f01320 a2=0 a3=1 items=0 ppid=3025 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.357000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:54:28.360000 audit[3110]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_rule pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:54:28.360000 audit[3110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff5f69a30 a2=0 a3=1 items=0 ppid=3025 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.360000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:54:28.397000 audit[3116]: NETFILTER_CFG table=filter:66 family=2 entries=8 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:28.397000 audit[3116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=fffff40b1060 a2=0 a3=1 items=0 ppid=3025 pid=3116 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.397000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:28.420000 audit[3116]: NETFILTER_CFG table=nat:67 family=2 entries=14 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:28.420000 audit[3116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff40b1060 a2=0 a3=1 items=0 ppid=3025 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.420000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:28.422000 audit[3121]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.422000 audit[3121]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdda46090 a2=0 a3=1 items=0 ppid=3025 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.422000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:54:28.425000 audit[3123]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.425000 audit[3123]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffffc932760 a2=0 a3=1 items=0 ppid=3025 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.425000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 14:54:28.428000 audit[3126]: NETFILTER_CFG table=filter:70 family=10 entries=2 op=nft_register_chain pid=3126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.428000 audit[3126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe4766c10 a2=0 a3=1 items=0 ppid=3025 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.428000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 14:54:28.430000 audit[3127]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_chain pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.430000 audit[3127]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca1e5cd0 a2=0 a3=1 items=0 ppid=3025 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.430000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:54:28.432000 audit[3129]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.432000 audit[3129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff0b967f0 a2=0 a3=1 items=0 ppid=3025 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.432000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:54:28.433000 audit[3130]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.433000 audit[3130]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc671cf00 a2=0 a3=1 items=0 ppid=3025 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.433000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:54:28.436000 audit[3132]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.436000 audit[3132]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff9dcf1c0 a2=0 a3=1 items=0 ppid=3025 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.436000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 14:54:28.439000 audit[3135]: NETFILTER_CFG table=filter:75 family=10 entries=2 op=nft_register_chain pid=3135 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.439000 audit[3135]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd5430910 a2=0 a3=1 items=0 ppid=3025 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.439000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:54:28.440000 audit[3136]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_chain pid=3136 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.440000 audit[3136]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff27f21a0 a2=0 a3=1 items=0 ppid=3025 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.440000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 14:54:28.443000 audit[3138]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3138 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.443000 audit[3138]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe16a81b0 a2=0 a3=1 items=0 ppid=3025 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.443000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 14:54:28.444000 audit[3139]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_chain pid=3139 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.444000 audit[3139]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc7922b60 a2=0 a3=1 items=0 ppid=3025 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.444000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:54:28.447000 audit[3141]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.447000 audit[3141]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd6931300 a2=0 a3=1 items=0 ppid=3025 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.447000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:54:28.450000 audit[3144]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.450000 audit[3144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeeaa1990 a2=0 a3=1 items=0 ppid=3025 pid=3144 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.450000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:54:28.454000 audit[3147]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=3147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.454000 audit[3147]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc53f6de0 a2=0 a3=1 items=0 ppid=3025 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.454000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 14:54:28.455000 audit[3148]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.455000 audit[3148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd84f1d00 a2=0 a3=1 items=0 ppid=3025 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.455000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:54:28.458000 audit[3150]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.458000 audit[3150]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffd16bac40 a2=0 a3=1 items=0 ppid=3025 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.458000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:54:28.461000 audit[3153]: NETFILTER_CFG table=nat:84 family=10 entries=2 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.461000 audit[3153]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffcc15f6d0 a2=0 a3=1 items=0 ppid=3025 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.461000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:54:28.463000 audit[3154]: 
NETFILTER_CFG table=nat:85 family=10 entries=1 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.463000 audit[3154]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc54f86b0 a2=0 a3=1 items=0 ppid=3025 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.463000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:54:28.465000 audit[3156]: NETFILTER_CFG table=nat:86 family=10 entries=2 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.465000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd2a2eff0 a2=0 a3=1 items=0 ppid=3025 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.465000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:54:28.466000 audit[3157]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3157 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.466000 audit[3157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe344ce70 a2=0 a3=1 items=0 ppid=3025 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.466000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:54:28.469000 audit[3159]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.469000 audit[3159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff14e7e30 a2=0 a3=1 items=0 ppid=3025 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.469000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:54:28.473000 audit[3162]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_rule pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:54:28.473000 audit[3162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd8155b50 a2=0 a3=1 items=0 ppid=3025 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.473000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:54:28.476000 audit[3164]: NETFILTER_CFG table=filter:90 family=10 entries=3 
op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:54:28.476000 audit[3164]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=fffff5336940 a2=0 a3=1 items=0 ppid=3025 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.476000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:28.476000 audit[3164]: NETFILTER_CFG table=nat:91 family=10 entries=7 op=nft_register_chain pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:54:28.476000 audit[3164]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffff5336940 a2=0 a3=1 items=0 ppid=3025 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.476000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:29.912137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930040633.mount: Deactivated successfully. Jun 25 14:54:30.407759 containerd[1492]: time="2024-06-25T14:54:30.407696399Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:30.409827 containerd[1492]: time="2024-06-25T14:54:30.409772694Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473650" Jun 25 14:54:30.414870 containerd[1492]: time="2024-06-25T14:54:30.414842027Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:30.417873 containerd[1492]: time="2024-06-25T14:54:30.417837266Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:30.422266 containerd[1492]: time="2024-06-25T14:54:30.422222262Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:30.423151 containerd[1492]: time="2024-06-25T14:54:30.423112646Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.397990915s" Jun 25 14:54:30.423151 containerd[1492]: time="2024-06-25T14:54:30.423148047Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jun 25 14:54:30.426278 containerd[1492]: time="2024-06-25T14:54:30.426251688Z" level=info msg="CreateContainer within sandbox \"ebb97c2bc9cbe3e5ffb34415bb3efa5a2dc5e3096eba9d7950526c740e874b89\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 14:54:30.458424 containerd[1492]: 
time="2024-06-25T14:54:30.458384176Z" level=info msg="CreateContainer within sandbox \"ebb97c2bc9cbe3e5ffb34415bb3efa5a2dc5e3096eba9d7950526c740e874b89\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"334c0ff049b062e0b442fdfca60dcba0ab6eab9e6d32d103cf3a4911685e429e\"" Jun 25 14:54:30.460646 containerd[1492]: time="2024-06-25T14:54:30.459433764Z" level=info msg="StartContainer for \"334c0ff049b062e0b442fdfca60dcba0ab6eab9e6d32d103cf3a4911685e429e\"" Jun 25 14:54:30.482001 systemd[1]: Started cri-containerd-334c0ff049b062e0b442fdfca60dcba0ab6eab9e6d32d103cf3a4911685e429e.scope - libcontainer container 334c0ff049b062e0b442fdfca60dcba0ab6eab9e6d32d103cf3a4911685e429e. Jun 25 14:54:30.489000 audit: BPF prog-id=138 op=LOAD Jun 25 14:54:30.490000 audit: BPF prog-id=139 op=LOAD Jun 25 14:54:30.490000 audit[3181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2932 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:30.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333346330666630343962303632653062343432666466636136306463 Jun 25 14:54:30.490000 audit: BPF prog-id=140 op=LOAD Jun 25 14:54:30.490000 audit[3181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2932 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:30.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333346330666630343962303632653062343432666466636136306463 Jun 25 14:54:30.490000 audit: BPF prog-id=140 op=UNLOAD Jun 25 14:54:30.490000 audit: BPF prog-id=139 op=UNLOAD Jun 25 14:54:30.490000 audit: BPF prog-id=141 op=LOAD Jun 25 14:54:30.490000 audit[3181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2932 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:30.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333346330666630343962303632653062343432666466636136306463 Jun 25 14:54:30.506284 containerd[1492]: time="2024-06-25T14:54:30.506236119Z" level=info msg="StartContainer for \"334c0ff049b062e0b442fdfca60dcba0ab6eab9e6d32d103cf3a4911685e429e\" returns successfully" Jun 25 14:54:30.597741 kubelet[2843]: I0625 14:54:30.597680 2843 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7p8lz" podStartSLOduration=3.59761177 podCreationTimestamp="2024-06-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:54:28.592232116 +0000 UTC 
m=+15.160389461" watchObservedRunningTime="2024-06-25 14:54:30.59761177 +0000 UTC m=+17.165769115" Jun 25 14:54:33.548870 kubelet[2843]: I0625 14:54:33.548834 2843 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-tqnts" podStartSLOduration=4.149814822 podCreationTimestamp="2024-06-25 14:54:27 +0000 UTC" firstStartedPulling="2024-06-25 14:54:28.024441631 +0000 UTC m=+14.592598976" lastFinishedPulling="2024-06-25 14:54:30.423419974 +0000 UTC m=+16.991577319" observedRunningTime="2024-06-25 14:54:30.598063622 +0000 UTC m=+17.166220967" watchObservedRunningTime="2024-06-25 14:54:33.548793165 +0000 UTC m=+20.116950550" Jun 25 14:54:34.426000 audit[3214]: NETFILTER_CFG table=filter:92 family=2 entries=15 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:34.432667 kernel: kauditd_printk_skb: 202 callbacks suppressed Jun 25 14:54:34.432729 kernel: audit: type=1325 audit(1719327274.426:464): table=filter:92 family=2 entries=15 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:34.426000 audit[3214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffc1fe6010 a2=0 a3=1 items=0 ppid=3025 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:34.471602 kernel: audit: type=1300 audit(1719327274.426:464): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffc1fe6010 a2=0 a3=1 items=0 ppid=3025 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:34.426000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:34.485262 kernel: audit: type=1327 audit(1719327274.426:464): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:34.446000 audit[3214]: NETFILTER_CFG table=nat:93 family=2 entries=12 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:34.497558 kernel: audit: type=1325 audit(1719327274.446:465): table=nat:93 family=2 entries=12 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:34.446000 audit[3214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc1fe6010 a2=0 a3=1 items=0 ppid=3025 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:34.521196 kernel: audit: type=1300 audit(1719327274.446:465): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc1fe6010 a2=0 a3=1 items=0 ppid=3025 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:34.446000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:34.531000 audit[3216]: NETFILTER_CFG table=filter:94 family=2 
entries=16 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:34.564174 kernel: audit: type=1327 audit(1719327274.446:465): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:34.564243 kernel: audit: type=1325 audit(1719327274.531:466): table=filter:94 family=2 entries=16 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:34.531000 audit[3216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffd93f07d0 a2=0 a3=1 items=0 ppid=3025 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:34.531000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:34.621611 kernel: audit: type=1300 audit(1719327274.531:466): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffd93f07d0 a2=0 a3=1 items=0 ppid=3025 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:34.621732 kernel: audit: type=1327 audit(1719327274.531:466): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:34.532000 audit[3216]: NETFILTER_CFG table=nat:95 family=2 entries=12 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:34.636847 kernel: audit: type=1325 audit(1719327274.532:467): table=nat:95 family=2 entries=12 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:34.532000 audit[3216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd93f07d0 a2=0 a3=1 items=0 ppid=3025 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:34.532000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:34.643083 kubelet[2843]: I0625 14:54:34.643046 2843 topology_manager.go:215] "Topology Admit Handler" podUID="9fecff5b-8d76-44f3-ba53-6bacf502a915" podNamespace="calico-system" podName="calico-typha-c8bccf57f-t2qv7" Jun 25 14:54:34.647876 systemd[1]: Created slice kubepods-besteffort-pod9fecff5b_8d76_44f3_ba53_6bacf502a915.slice - libcontainer container kubepods-besteffort-pod9fecff5b_8d76_44f3_ba53_6bacf502a915.slice. 
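A note on the audit records above: each proctitle= field is a hex-encoded, NUL-separated argv vector, so the actual ip6tables/iptables-restore invocations made by kube-proxy can be recovered by decoding it. A minimal Python sketch (not part of the log; the hex string is copied from the KUBE-NODEPORTS record above):

    # Decode an audit PROCTITLE value into the command line it records.
    hexstr = ("6970367461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D4E4F4445504F525453002D740066696C746572")
    argv = bytes.fromhex(hexstr).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # prints: ip6tables -w 5 -W 100000 -N KUBE-NODEPORTS -t filter

The same decoding applies to the runc PROCTITLE entries further down in this section.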
Jun 25 14:54:34.748810 kubelet[2843]: I0625 14:54:34.748688 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fecff5b-8d76-44f3-ba53-6bacf502a915-tigera-ca-bundle\") pod \"calico-typha-c8bccf57f-t2qv7\" (UID: \"9fecff5b-8d76-44f3-ba53-6bacf502a915\") " pod="calico-system/calico-typha-c8bccf57f-t2qv7" Jun 25 14:54:34.748810 kubelet[2843]: I0625 14:54:34.748738 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9fecff5b-8d76-44f3-ba53-6bacf502a915-typha-certs\") pod \"calico-typha-c8bccf57f-t2qv7\" (UID: \"9fecff5b-8d76-44f3-ba53-6bacf502a915\") " pod="calico-system/calico-typha-c8bccf57f-t2qv7" Jun 25 14:54:34.748810 kubelet[2843]: I0625 14:54:34.748760 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4jsr\" (UniqueName: \"kubernetes.io/projected/9fecff5b-8d76-44f3-ba53-6bacf502a915-kube-api-access-x4jsr\") pod \"calico-typha-c8bccf57f-t2qv7\" (UID: \"9fecff5b-8d76-44f3-ba53-6bacf502a915\") " pod="calico-system/calico-typha-c8bccf57f-t2qv7" Jun 25 14:54:34.790817 kubelet[2843]: I0625 14:54:34.790769 2843 topology_manager.go:215] "Topology Admit Handler" podUID="bdcc6758-d360-4da6-92f3-42cba856bd81" podNamespace="calico-system" podName="calico-node-psfg2" Jun 25 14:54:34.795618 systemd[1]: Created slice kubepods-besteffort-podbdcc6758_d360_4da6_92f3_42cba856bd81.slice - libcontainer container kubepods-besteffort-podbdcc6758_d360_4da6_92f3_42cba856bd81.slice. Jun 25 14:54:34.849979 kubelet[2843]: I0625 14:54:34.849683 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bdcc6758-d360-4da6-92f3-42cba856bd81-node-certs\") pod \"calico-node-psfg2\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " pod="calico-system/calico-node-psfg2" Jun 25 14:54:34.850187 kubelet[2843]: I0625 14:54:34.850173 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-lib-modules\") pod \"calico-node-psfg2\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " pod="calico-system/calico-node-psfg2" Jun 25 14:54:34.850285 kubelet[2843]: I0625 14:54:34.850275 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-var-run-calico\") pod \"calico-node-psfg2\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " pod="calico-system/calico-node-psfg2" Jun 25 14:54:34.850526 kubelet[2843]: I0625 14:54:34.850499 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-cni-bin-dir\") pod \"calico-node-psfg2\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " pod="calico-system/calico-node-psfg2" Jun 25 14:54:34.850593 kubelet[2843]: I0625 14:54:34.850538 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-cni-net-dir\") pod \"calico-node-psfg2\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " 
pod="calico-system/calico-node-psfg2" Jun 25 14:54:34.850593 kubelet[2843]: I0625 14:54:34.850559 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-cni-log-dir\") pod \"calico-node-psfg2\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " pod="calico-system/calico-node-psfg2" Jun 25 14:54:34.850593 kubelet[2843]: I0625 14:54:34.850578 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-var-lib-calico\") pod \"calico-node-psfg2\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " pod="calico-system/calico-node-psfg2" Jun 25 14:54:34.850676 kubelet[2843]: I0625 14:54:34.850600 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw2jj\" (UniqueName: \"kubernetes.io/projected/bdcc6758-d360-4da6-92f3-42cba856bd81-kube-api-access-nw2jj\") pod \"calico-node-psfg2\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " pod="calico-system/calico-node-psfg2" Jun 25 14:54:34.850676 kubelet[2843]: I0625 14:54:34.850645 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-policysync\") pod \"calico-node-psfg2\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " pod="calico-system/calico-node-psfg2" Jun 25 14:54:34.850676 kubelet[2843]: I0625 14:54:34.850664 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-flexvol-driver-host\") pod \"calico-node-psfg2\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " pod="calico-system/calico-node-psfg2" Jun 25 14:54:34.850747 kubelet[2843]: I0625 14:54:34.850683 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdcc6758-d360-4da6-92f3-42cba856bd81-tigera-ca-bundle\") pod \"calico-node-psfg2\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " pod="calico-system/calico-node-psfg2" Jun 25 14:54:34.850747 kubelet[2843]: I0625 14:54:34.850711 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-xtables-lock\") pod \"calico-node-psfg2\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " pod="calico-system/calico-node-psfg2" Jun 25 14:54:34.916817 kubelet[2843]: I0625 14:54:34.916753 2843 topology_manager.go:215] "Topology Admit Handler" podUID="abc80f9d-37c5-4a3d-984d-c970bd8ec106" podNamespace="calico-system" podName="csi-node-driver-4gqsk" Jun 25 14:54:34.917066 kubelet[2843]: E0625 14:54:34.917031 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gqsk" podUID="abc80f9d-37c5-4a3d-984d-c970bd8ec106" Jun 25 14:54:34.951117 kubelet[2843]: I0625 14:54:34.951082 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: 
\"kubernetes.io/host-path/abc80f9d-37c5-4a3d-984d-c970bd8ec106-varrun\") pod \"csi-node-driver-4gqsk\" (UID: \"abc80f9d-37c5-4a3d-984d-c970bd8ec106\") " pod="calico-system/csi-node-driver-4gqsk" Jun 25 14:54:34.951351 kubelet[2843]: I0625 14:54:34.951335 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/abc80f9d-37c5-4a3d-984d-c970bd8ec106-socket-dir\") pod \"csi-node-driver-4gqsk\" (UID: \"abc80f9d-37c5-4a3d-984d-c970bd8ec106\") " pod="calico-system/csi-node-driver-4gqsk" Jun 25 14:54:34.951438 kubelet[2843]: I0625 14:54:34.951428 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/abc80f9d-37c5-4a3d-984d-c970bd8ec106-registration-dir\") pod \"csi-node-driver-4gqsk\" (UID: \"abc80f9d-37c5-4a3d-984d-c970bd8ec106\") " pod="calico-system/csi-node-driver-4gqsk" Jun 25 14:54:34.951619 kubelet[2843]: I0625 14:54:34.951605 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abc80f9d-37c5-4a3d-984d-c970bd8ec106-kubelet-dir\") pod \"csi-node-driver-4gqsk\" (UID: \"abc80f9d-37c5-4a3d-984d-c970bd8ec106\") " pod="calico-system/csi-node-driver-4gqsk" Jun 25 14:54:34.953813 kubelet[2843]: E0625 14:54:34.953768 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.953954 kubelet[2843]: W0625 14:54:34.953937 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.954067 kubelet[2843]: E0625 14:54:34.954030 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:34.954376 kubelet[2843]: E0625 14:54:34.954359 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.954468 kubelet[2843]: W0625 14:54:34.954453 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.954532 kubelet[2843]: E0625 14:54:34.954523 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:34.955305 containerd[1492]: time="2024-06-25T14:54:34.955258432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c8bccf57f-t2qv7,Uid:9fecff5b-8d76-44f3-ba53-6bacf502a915,Namespace:calico-system,Attempt:0,}" Jun 25 14:54:34.955833 kubelet[2843]: E0625 14:54:34.955816 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.955947 kubelet[2843]: W0625 14:54:34.955932 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.956030 kubelet[2843]: E0625 14:54:34.956020 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:34.962189 kubelet[2843]: E0625 14:54:34.962160 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.962309 kubelet[2843]: W0625 14:54:34.962293 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.962444 kubelet[2843]: E0625 14:54:34.962430 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:34.965693 kubelet[2843]: E0625 14:54:34.965673 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.966994 kubelet[2843]: W0625 14:54:34.966964 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.966994 kubelet[2843]: E0625 14:54:34.966995 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:34.967211 kubelet[2843]: E0625 14:54:34.967182 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.967211 kubelet[2843]: W0625 14:54:34.967198 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.967211 kubelet[2843]: E0625 14:54:34.967209 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:34.974900 kubelet[2843]: E0625 14:54:34.974852 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.975009 kubelet[2843]: W0625 14:54:34.974993 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.975121 kubelet[2843]: E0625 14:54:34.975109 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:34.978297 kubelet[2843]: E0625 14:54:34.978279 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.978412 kubelet[2843]: W0625 14:54:34.978398 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.978523 kubelet[2843]: E0625 14:54:34.978510 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:34.978949 kubelet[2843]: E0625 14:54:34.978919 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.978949 kubelet[2843]: W0625 14:54:34.978938 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.978949 kubelet[2843]: E0625 14:54:34.978966 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:34.979564 kubelet[2843]: E0625 14:54:34.979536 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.979564 kubelet[2843]: W0625 14:54:34.979555 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.979682 kubelet[2843]: E0625 14:54:34.979667 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:34.980057 kubelet[2843]: E0625 14:54:34.980024 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.980057 kubelet[2843]: W0625 14:54:34.980040 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.980167 kubelet[2843]: E0625 14:54:34.980135 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:34.980940 kubelet[2843]: E0625 14:54:34.980913 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.980940 kubelet[2843]: W0625 14:54:34.980930 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.981065 kubelet[2843]: E0625 14:54:34.981055 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:34.981091 kubelet[2843]: I0625 14:54:34.981085 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx2gz\" (UniqueName: \"kubernetes.io/projected/abc80f9d-37c5-4a3d-984d-c970bd8ec106-kube-api-access-fx2gz\") pod \"csi-node-driver-4gqsk\" (UID: \"abc80f9d-37c5-4a3d-984d-c970bd8ec106\") " pod="calico-system/csi-node-driver-4gqsk" Jun 25 14:54:34.981635 kubelet[2843]: E0625 14:54:34.981605 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.981635 kubelet[2843]: W0625 14:54:34.981622 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.981971 kubelet[2843]: E0625 14:54:34.981931 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:34.984296 kubelet[2843]: E0625 14:54:34.981996 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.984296 kubelet[2843]: W0625 14:54:34.982002 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.984296 kubelet[2843]: E0625 14:54:34.982065 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:34.984296 kubelet[2843]: E0625 14:54:34.982681 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.984296 kubelet[2843]: W0625 14:54:34.982693 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.984296 kubelet[2843]: E0625 14:54:34.982810 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:34.984296 kubelet[2843]: E0625 14:54:34.983888 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.984296 kubelet[2843]: W0625 14:54:34.983903 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.984296 kubelet[2843]: E0625 14:54:34.983929 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:34.984296 kubelet[2843]: E0625 14:54:34.984144 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.984550 kubelet[2843]: W0625 14:54:34.984155 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.984550 kubelet[2843]: E0625 14:54:34.984176 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:34.984809 kubelet[2843]: E0625 14:54:34.984770 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.984809 kubelet[2843]: W0625 14:54:34.984796 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.984908 kubelet[2843]: E0625 14:54:34.984815 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:34.985088 kubelet[2843]: E0625 14:54:34.985069 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.985088 kubelet[2843]: W0625 14:54:34.985084 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.985169 kubelet[2843]: E0625 14:54:34.985097 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:34.985331 kubelet[2843]: E0625 14:54:34.985314 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:34.985331 kubelet[2843]: W0625 14:54:34.985332 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:34.985418 kubelet[2843]: E0625 14:54:34.985345 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:35.019334 containerd[1492]: time="2024-06-25T14:54:35.019166717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:54:35.019497 containerd[1492]: time="2024-06-25T14:54:35.019470724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:35.019612 containerd[1492]: time="2024-06-25T14:54:35.019589607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:54:35.019716 containerd[1492]: time="2024-06-25T14:54:35.019695010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:35.036951 systemd[1]: Started cri-containerd-05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68.scope - libcontainer container 05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68. Jun 25 14:54:35.050000 audit: BPF prog-id=142 op=LOAD Jun 25 14:54:35.051000 audit: BPF prog-id=143 op=LOAD Jun 25 14:54:35.051000 audit[3260]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=3251 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:35.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035613863636535303935623635616634356638316134353763383035 Jun 25 14:54:35.051000 audit: BPF prog-id=144 op=LOAD Jun 25 14:54:35.051000 audit[3260]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=3251 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:35.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035613863636535303935623635616634356638316134353763383035 Jun 25 14:54:35.051000 audit: BPF prog-id=144 op=UNLOAD Jun 25 14:54:35.052000 audit: BPF prog-id=143 op=UNLOAD Jun 25 14:54:35.052000 audit: BPF prog-id=145 op=LOAD Jun 25 14:54:35.052000 audit[3260]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=3251 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:35.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035613863636535303935623635616634356638316134353763383035 Jun 25 14:54:35.084931 kubelet[2843]: E0625 14:54:35.084748 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.084931 kubelet[2843]: W0625 14:54:35.084775 2843 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.084931 kubelet[2843]: E0625 14:54:35.084812 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.085439 kubelet[2843]: E0625 14:54:35.085197 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.085439 kubelet[2843]: W0625 14:54:35.085210 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.085439 kubelet[2843]: E0625 14:54:35.085236 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.085767 kubelet[2843]: E0625 14:54:35.085610 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.085767 kubelet[2843]: W0625 14:54:35.085623 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.085767 kubelet[2843]: E0625 14:54:35.085657 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.087030 kubelet[2843]: E0625 14:54:35.086869 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.087030 kubelet[2843]: W0625 14:54:35.086884 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.087030 kubelet[2843]: E0625 14:54:35.086912 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.087353 kubelet[2843]: E0625 14:54:35.087242 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.087353 kubelet[2843]: W0625 14:54:35.087254 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.087353 kubelet[2843]: E0625 14:54:35.087299 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:35.087673 kubelet[2843]: E0625 14:54:35.087531 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.087673 kubelet[2843]: W0625 14:54:35.087543 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.087673 kubelet[2843]: E0625 14:54:35.087647 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.087977 kubelet[2843]: E0625 14:54:35.087867 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.087977 kubelet[2843]: W0625 14:54:35.087878 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.087977 kubelet[2843]: E0625 14:54:35.087917 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.088261 kubelet[2843]: E0625 14:54:35.088149 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.088261 kubelet[2843]: W0625 14:54:35.088160 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.088261 kubelet[2843]: E0625 14:54:35.088200 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.088641 kubelet[2843]: E0625 14:54:35.088439 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.088641 kubelet[2843]: W0625 14:54:35.088450 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.088641 kubelet[2843]: E0625 14:54:35.088481 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.088986 kubelet[2843]: E0625 14:54:35.088818 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.088986 kubelet[2843]: W0625 14:54:35.088833 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.088986 kubelet[2843]: E0625 14:54:35.088847 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:35.089381 kubelet[2843]: E0625 14:54:35.089156 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.089381 kubelet[2843]: W0625 14:54:35.089169 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.089381 kubelet[2843]: E0625 14:54:35.089184 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.089740 kubelet[2843]: E0625 14:54:35.089533 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.089740 kubelet[2843]: W0625 14:54:35.089547 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.089740 kubelet[2843]: E0625 14:54:35.089561 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.090085 kubelet[2843]: E0625 14:54:35.089907 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.090085 kubelet[2843]: W0625 14:54:35.089922 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.090085 kubelet[2843]: E0625 14:54:35.089937 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.090360 kubelet[2843]: E0625 14:54:35.090253 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.090360 kubelet[2843]: W0625 14:54:35.090266 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.090360 kubelet[2843]: E0625 14:54:35.090279 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.090610 kubelet[2843]: E0625 14:54:35.090597 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.090828 kubelet[2843]: W0625 14:54:35.090693 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.090828 kubelet[2843]: E0625 14:54:35.090714 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:35.091045 kubelet[2843]: E0625 14:54:35.091032 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.091149 kubelet[2843]: W0625 14:54:35.091135 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.091226 kubelet[2843]: E0625 14:54:35.091216 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.092268 kubelet[2843]: E0625 14:54:35.092250 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.092438 kubelet[2843]: W0625 14:54:35.092422 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.092522 kubelet[2843]: E0625 14:54:35.092511 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.092770 kubelet[2843]: E0625 14:54:35.092756 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.092889 kubelet[2843]: W0625 14:54:35.092873 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.093044 kubelet[2843]: E0625 14:54:35.093031 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.093250 kubelet[2843]: E0625 14:54:35.093238 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.093342 kubelet[2843]: W0625 14:54:35.093328 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.093471 kubelet[2843]: E0625 14:54:35.093460 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.093935 kubelet[2843]: E0625 14:54:35.093914 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.094041 kubelet[2843]: W0625 14:54:35.094028 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.094266 kubelet[2843]: E0625 14:54:35.094215 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:35.094725 kubelet[2843]: E0625 14:54:35.094701 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.094912 kubelet[2843]: W0625 14:54:35.094896 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.094999 kubelet[2843]: E0625 14:54:35.094988 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.095404 kubelet[2843]: E0625 14:54:35.095390 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.097445 kubelet[2843]: W0625 14:54:35.097420 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.097596 kubelet[2843]: E0625 14:54:35.097583 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.098387 containerd[1492]: time="2024-06-25T14:54:35.098350415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-psfg2,Uid:bdcc6758-d360-4da6-92f3-42cba856bd81,Namespace:calico-system,Attempt:0,}" Jun 25 14:54:35.099758 containerd[1492]: time="2024-06-25T14:54:35.099727207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c8bccf57f-t2qv7,Uid:9fecff5b-8d76-44f3-ba53-6bacf502a915,Namespace:calico-system,Attempt:0,} returns sandbox id \"05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68\"" Jun 25 14:54:35.100017 kubelet[2843]: E0625 14:54:35.099991 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.100097 kubelet[2843]: W0625 14:54:35.100084 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.100168 kubelet[2843]: E0625 14:54:35.100159 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.101199 kubelet[2843]: E0625 14:54:35.101183 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.101305 kubelet[2843]: W0625 14:54:35.101290 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.101380 kubelet[2843]: E0625 14:54:35.101369 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:35.101913 kubelet[2843]: E0625 14:54:35.101896 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.102021 kubelet[2843]: W0625 14:54:35.102006 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.102087 kubelet[2843]: E0625 14:54:35.102077 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.102540 containerd[1492]: time="2024-06-25T14:54:35.102488192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 14:54:35.116166 kubelet[2843]: E0625 14:54:35.116138 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:35.116283 kubelet[2843]: W0625 14:54:35.116269 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:35.116393 kubelet[2843]: E0625 14:54:35.116381 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:35.157301 containerd[1492]: time="2024-06-25T14:54:35.156995151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:54:35.157301 containerd[1492]: time="2024-06-25T14:54:35.157088513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:35.157301 containerd[1492]: time="2024-06-25T14:54:35.157123514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:54:35.157301 containerd[1492]: time="2024-06-25T14:54:35.157167515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:35.173945 systemd[1]: Started cri-containerd-4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd.scope - libcontainer container 4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd. 
Jun 25 14:54:35.184000 audit: BPF prog-id=146 op=LOAD Jun 25 14:54:35.185000 audit: BPF prog-id=147 op=LOAD Jun 25 14:54:35.185000 audit[3327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=3317 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:35.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465386138343864333561646664353663316366623161316431316363 Jun 25 14:54:35.185000 audit: BPF prog-id=148 op=LOAD Jun 25 14:54:35.185000 audit[3327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=3317 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:35.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465386138343864333561646664353663316366623161316431316363 Jun 25 14:54:35.185000 audit: BPF prog-id=148 op=UNLOAD Jun 25 14:54:35.185000 audit: BPF prog-id=147 op=UNLOAD Jun 25 14:54:35.186000 audit: BPF prog-id=149 op=LOAD Jun 25 14:54:35.186000 audit[3327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=3317 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:35.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465386138343864333561646664353663316366623161316431316363 Jun 25 14:54:35.205843 containerd[1492]: time="2024-06-25T14:54:35.205800376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-psfg2,Uid:bdcc6758-d360-4da6-92f3-42cba856bd81,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd\"" Jun 25 14:54:35.573000 audit[3348]: NETFILTER_CFG table=filter:96 family=2 entries=16 op=nft_register_rule pid=3348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:35.573000 audit[3348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffffade4300 a2=0 a3=1 items=0 ppid=3025 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:35.573000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:35.575000 audit[3348]: NETFILTER_CFG table=nat:97 family=2 entries=12 op=nft_register_rule pid=3348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:35.575000 audit[3348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffade4300 a2=0 a3=1 items=0 ppid=3025 pid=3348 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:35.575000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:36.535123 kubelet[2843]: E0625 14:54:36.534656 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gqsk" podUID="abc80f9d-37c5-4a3d-984d-c970bd8ec106" Jun 25 14:54:36.677000 audit[3354]: NETFILTER_CFG table=filter:98 family=2 entries=16 op=nft_register_rule pid=3354 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:36.677000 audit[3354]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffd1e529f0 a2=0 a3=1 items=0 ppid=3025 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:36.677000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:36.679000 audit[3354]: NETFILTER_CFG table=nat:99 family=2 entries=12 op=nft_register_rule pid=3354 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:36.679000 audit[3354]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd1e529f0 a2=0 a3=1 items=0 ppid=3025 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:36.679000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:37.005237 containerd[1492]: time="2024-06-25T14:54:37.005118298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:37.008312 containerd[1492]: time="2024-06-25T14:54:37.008257328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jun 25 14:54:37.015350 containerd[1492]: time="2024-06-25T14:54:37.015297046Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:37.020433 containerd[1492]: time="2024-06-25T14:54:37.020388600Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:37.026471 containerd[1492]: time="2024-06-25T14:54:37.026432376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:37.026991 containerd[1492]: time="2024-06-25T14:54:37.026949187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", 
repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 1.924424874s" Jun 25 14:54:37.026991 containerd[1492]: time="2024-06-25T14:54:37.026986908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jun 25 14:54:37.027958 containerd[1492]: time="2024-06-25T14:54:37.027924289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 14:54:37.041100 containerd[1492]: time="2024-06-25T14:54:37.041060903Z" level=info msg="CreateContainer within sandbox \"05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 14:54:37.092200 containerd[1492]: time="2024-06-25T14:54:37.092148209Z" level=info msg="CreateContainer within sandbox \"05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b\"" Jun 25 14:54:37.092934 containerd[1492]: time="2024-06-25T14:54:37.092907266Z" level=info msg="StartContainer for \"97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b\"" Jun 25 14:54:37.124028 systemd[1]: Started cri-containerd-97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b.scope - libcontainer container 97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b. Jun 25 14:54:37.137000 audit: BPF prog-id=150 op=LOAD Jun 25 14:54:37.137000 audit: BPF prog-id=151 op=LOAD Jun 25 14:54:37.137000 audit[3366]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=3251 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:37.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646337383039363936376137353966653663313065313932636664 Jun 25 14:54:37.137000 audit: BPF prog-id=152 op=LOAD Jun 25 14:54:37.137000 audit[3366]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=3251 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:37.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646337383039363936376137353966653663313065313932636664 Jun 25 14:54:37.137000 audit: BPF prog-id=152 op=UNLOAD Jun 25 14:54:37.137000 audit: BPF prog-id=151 op=UNLOAD Jun 25 14:54:37.137000 audit: BPF prog-id=153 op=LOAD Jun 25 14:54:37.137000 audit[3366]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=3251 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:37.137000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646337383039363936376137353966653663313065313932636664 Jun 25 14:54:37.165389 containerd[1492]: time="2024-06-25T14:54:37.165330449Z" level=info msg="StartContainer for \"97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b\" returns successfully" Jun 25 14:54:37.610982 containerd[1492]: time="2024-06-25T14:54:37.610909637Z" level=info msg="StopContainer for \"97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b\" with timeout 300 (s)" Jun 25 14:54:37.611339 containerd[1492]: time="2024-06-25T14:54:37.611291526Z" level=info msg="Stop container \"97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b\" with signal terminated" Jun 25 14:54:37.626516 kubelet[2843]: I0625 14:54:37.626305 2843 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-c8bccf57f-t2qv7" podStartSLOduration=1.7011752919999998 podCreationTimestamp="2024-06-25 14:54:34 +0000 UTC" firstStartedPulling="2024-06-25 14:54:35.102233586 +0000 UTC m=+21.670390931" lastFinishedPulling="2024-06-25 14:54:37.027312435 +0000 UTC m=+23.595469740" observedRunningTime="2024-06-25 14:54:37.624910191 +0000 UTC m=+24.193067536" watchObservedRunningTime="2024-06-25 14:54:37.626254101 +0000 UTC m=+24.194411446" Jun 25 14:54:37.625000 audit: BPF prog-id=150 op=UNLOAD Jun 25 14:54:37.627215 systemd[1]: cri-containerd-97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b.scope: Deactivated successfully. Jun 25 14:54:37.627000 audit: BPF prog-id=153 op=UNLOAD Jun 25 14:54:38.032500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b-rootfs.mount: Deactivated successfully. 
Jun 25 14:54:38.534486 kubelet[2843]: E0625 14:54:38.534447 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gqsk" podUID="abc80f9d-37c5-4a3d-984d-c970bd8ec106" Jun 25 14:54:39.292663 containerd[1492]: time="2024-06-25T14:54:39.292591110Z" level=info msg="shim disconnected" id=97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b namespace=k8s.io Jun 25 14:54:39.293168 containerd[1492]: time="2024-06-25T14:54:39.293140522Z" level=warning msg="cleaning up after shim disconnected" id=97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b namespace=k8s.io Jun 25 14:54:39.293245 containerd[1492]: time="2024-06-25T14:54:39.293231004Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:54:39.308531 containerd[1492]: time="2024-06-25T14:54:39.308475131Z" level=info msg="StopContainer for \"97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b\" returns successfully" Jun 25 14:54:39.309378 containerd[1492]: time="2024-06-25T14:54:39.309340469Z" level=info msg="StopPodSandbox for \"05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68\"" Jun 25 14:54:39.309461 containerd[1492]: time="2024-06-25T14:54:39.309402831Z" level=info msg="Container to stop \"97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 14:54:39.313199 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68-shm.mount: Deactivated successfully. Jun 25 14:54:39.317000 audit: BPF prog-id=142 op=UNLOAD Jun 25 14:54:39.318956 systemd[1]: cri-containerd-05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68.scope: Deactivated successfully. Jun 25 14:54:39.321000 audit: BPF prog-id=145 op=UNLOAD Jun 25 14:54:39.341982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68-rootfs.mount: Deactivated successfully. 
Jun 25 14:54:39.360053 containerd[1492]: time="2024-06-25T14:54:39.359983955Z" level=info msg="shim disconnected" id=05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68 namespace=k8s.io Jun 25 14:54:39.360053 containerd[1492]: time="2024-06-25T14:54:39.360045596Z" level=warning msg="cleaning up after shim disconnected" id=05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68 namespace=k8s.io Jun 25 14:54:39.360053 containerd[1492]: time="2024-06-25T14:54:39.360055516Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:54:39.372017 containerd[1492]: time="2024-06-25T14:54:39.371961572Z" level=info msg="TearDown network for sandbox \"05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68\" successfully" Jun 25 14:54:39.372017 containerd[1492]: time="2024-06-25T14:54:39.372004653Z" level=info msg="StopPodSandbox for \"05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68\" returns successfully" Jun 25 14:54:39.396018 kubelet[2843]: I0625 14:54:39.395454 2843 topology_manager.go:215] "Topology Admit Handler" podUID="76c657d0-4a62-4e88-910f-368765a4c1b0" podNamespace="calico-system" podName="calico-typha-6bf468d948-8xtwb" Jun 25 14:54:39.396018 kubelet[2843]: E0625 14:54:39.395530 2843 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9fecff5b-8d76-44f3-ba53-6bacf502a915" containerName="calico-typha" Jun 25 14:54:39.396018 kubelet[2843]: I0625 14:54:39.395556 2843 memory_manager.go:346] "RemoveStaleState removing state" podUID="9fecff5b-8d76-44f3-ba53-6bacf502a915" containerName="calico-typha" Jun 25 14:54:39.400845 systemd[1]: Created slice kubepods-besteffort-pod76c657d0_4a62_4e88_910f_368765a4c1b0.slice - libcontainer container kubepods-besteffort-pod76c657d0_4a62_4e88_910f_368765a4c1b0.slice. 
Jun 25 14:54:39.417000 audit[3463]: NETFILTER_CFG table=filter:100 family=2 entries=16 op=nft_register_rule pid=3463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:39.417000 audit[3463]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffd6fbb320 a2=0 a3=1 items=0 ppid=3025 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:39.417000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:39.417000 audit[3463]: NETFILTER_CFG table=nat:101 family=2 entries=12 op=nft_register_rule pid=3463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:39.417000 audit[3463]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd6fbb320 a2=0 a3=1 items=0 ppid=3025 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:39.417000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:39.427000 audit[3465]: NETFILTER_CFG table=filter:102 family=2 entries=16 op=nft_register_rule pid=3465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:39.433358 kernel: kauditd_printk_skb: 60 callbacks suppressed Jun 25 14:54:39.433450 kernel: audit: type=1325 audit(1719327279.427:496): table=filter:102 family=2 entries=16 op=nft_register_rule pid=3465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:39.427000 audit[3465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffffe55670 a2=0 a3=1 items=0 ppid=3025 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:39.468765 kernel: audit: type=1300 audit(1719327279.427:496): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffffe55670 a2=0 a3=1 items=0 ppid=3025 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:39.427000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:39.478140 kubelet[2843]: E0625 14:54:39.477978 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.478140 kubelet[2843]: W0625 14:54:39.478001 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.478140 kubelet[2843]: E0625 14:54:39.478028 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:39.483016 kernel: audit: type=1327 audit(1719327279.427:496): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:39.428000 audit[3465]: NETFILTER_CFG table=nat:103 family=2 entries=12 op=nft_register_rule pid=3465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:39.484900 kubelet[2843]: E0625 14:54:39.484532 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.484900 kubelet[2843]: W0625 14:54:39.484550 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.484900 kubelet[2843]: E0625 14:54:39.484572 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.485651 kubelet[2843]: E0625 14:54:39.485497 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.485651 kubelet[2843]: W0625 14:54:39.485512 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.485651 kubelet[2843]: E0625 14:54:39.485550 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.495689 kernel: audit: type=1325 audit(1719327279.428:497): table=nat:103 family=2 entries=12 op=nft_register_rule pid=3465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:39.428000 audit[3465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffffe55670 a2=0 a3=1 items=0 ppid=3025 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:39.497098 kubelet[2843]: E0625 14:54:39.497083 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.497195 kubelet[2843]: W0625 14:54:39.497180 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.497267 kubelet[2843]: E0625 14:54:39.497257 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:39.500078 kubelet[2843]: E0625 14:54:39.500062 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.500206 kubelet[2843]: W0625 14:54:39.500192 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.500277 kubelet[2843]: E0625 14:54:39.500267 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.504979 kubelet[2843]: E0625 14:54:39.504959 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.505132 kubelet[2843]: W0625 14:54:39.505116 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.505210 kubelet[2843]: E0625 14:54:39.505200 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.508973 kubelet[2843]: E0625 14:54:39.508954 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.509116 kubelet[2843]: W0625 14:54:39.509101 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.509207 kubelet[2843]: E0625 14:54:39.509196 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.512953 kubelet[2843]: E0625 14:54:39.512936 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.513074 kubelet[2843]: W0625 14:54:39.513060 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.513149 kubelet[2843]: E0625 14:54:39.513139 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.513431 kubelet[2843]: E0625 14:54:39.513419 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.513535 kubelet[2843]: W0625 14:54:39.513521 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.513605 kubelet[2843]: E0625 14:54:39.513595 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:39.517001 kubelet[2843]: E0625 14:54:39.516986 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.517112 kubelet[2843]: W0625 14:54:39.517099 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.517191 kubelet[2843]: E0625 14:54:39.517181 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.519240 kernel: audit: type=1300 audit(1719327279.428:497): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffffe55670 a2=0 a3=1 items=0 ppid=3025 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:39.428000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:39.521035 kubelet[2843]: E0625 14:54:39.521023 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.521128 kubelet[2843]: W0625 14:54:39.521115 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.521209 kubelet[2843]: E0625 14:54:39.521199 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.527995 kubelet[2843]: E0625 14:54:39.527975 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.528151 kubelet[2843]: W0625 14:54:39.528136 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.528238 kubelet[2843]: E0625 14:54:39.528227 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.529758 kubelet[2843]: E0625 14:54:39.529743 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.529893 kubelet[2843]: W0625 14:54:39.529879 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.529968 kubelet[2843]: E0625 14:54:39.529958 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:39.530057 kubelet[2843]: I0625 14:54:39.530048 2843 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9fecff5b-8d76-44f3-ba53-6bacf502a915-typha-certs\") pod \"9fecff5b-8d76-44f3-ba53-6bacf502a915\" (UID: \"9fecff5b-8d76-44f3-ba53-6bacf502a915\") " Jun 25 14:54:39.530827 kubelet[2843]: E0625 14:54:39.530814 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.530945 kubelet[2843]: W0625 14:54:39.530931 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.531076 kubelet[2843]: E0625 14:54:39.531065 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.531258 kernel: audit: type=1327 audit(1719327279.428:497): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:39.531943 kubelet[2843]: E0625 14:54:39.531929 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.532037 kubelet[2843]: W0625 14:54:39.532024 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.532104 kubelet[2843]: E0625 14:54:39.532095 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.532187 kubelet[2843]: I0625 14:54:39.532177 2843 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fecff5b-8d76-44f3-ba53-6bacf502a915-tigera-ca-bundle\") pod \"9fecff5b-8d76-44f3-ba53-6bacf502a915\" (UID: \"9fecff5b-8d76-44f3-ba53-6bacf502a915\") " Jun 25 14:54:39.532450 kubelet[2843]: E0625 14:54:39.532438 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.532566 kubelet[2843]: W0625 14:54:39.532555 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.532633 kubelet[2843]: E0625 14:54:39.532623 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:39.532712 kubelet[2843]: I0625 14:54:39.532703 2843 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4jsr\" (UniqueName: \"kubernetes.io/projected/9fecff5b-8d76-44f3-ba53-6bacf502a915-kube-api-access-x4jsr\") pod \"9fecff5b-8d76-44f3-ba53-6bacf502a915\" (UID: \"9fecff5b-8d76-44f3-ba53-6bacf502a915\") " Jun 25 14:54:39.532968 kubelet[2843]: E0625 14:54:39.532957 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.533058 kubelet[2843]: W0625 14:54:39.533045 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.533129 kubelet[2843]: E0625 14:54:39.533120 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.533197 kubelet[2843]: I0625 14:54:39.533188 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76c657d0-4a62-4e88-910f-368765a4c1b0-tigera-ca-bundle\") pod \"calico-typha-6bf468d948-8xtwb\" (UID: \"76c657d0-4a62-4e88-910f-368765a4c1b0\") " pod="calico-system/calico-typha-6bf468d948-8xtwb" Jun 25 14:54:39.533421 kubelet[2843]: E0625 14:54:39.533410 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.533513 kubelet[2843]: W0625 14:54:39.533488 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.533576 kubelet[2843]: E0625 14:54:39.533565 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.533648 kubelet[2843]: I0625 14:54:39.533639 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8pgw\" (UniqueName: \"kubernetes.io/projected/76c657d0-4a62-4e88-910f-368765a4c1b0-kube-api-access-w8pgw\") pod \"calico-typha-6bf468d948-8xtwb\" (UID: \"76c657d0-4a62-4e88-910f-368765a4c1b0\") " pod="calico-system/calico-typha-6bf468d948-8xtwb" Jun 25 14:54:39.533887 kubelet[2843]: E0625 14:54:39.533875 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.534072 kubelet[2843]: W0625 14:54:39.534058 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.534143 kubelet[2843]: E0625 14:54:39.534134 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:39.534206 kubelet[2843]: I0625 14:54:39.534197 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/76c657d0-4a62-4e88-910f-368765a4c1b0-typha-certs\") pod \"calico-typha-6bf468d948-8xtwb\" (UID: \"76c657d0-4a62-4e88-910f-368765a4c1b0\") " pod="calico-system/calico-typha-6bf468d948-8xtwb" Jun 25 14:54:39.537336 systemd[1]: var-lib-kubelet-pods-9fecff5b\x2d8d76\x2d44f3\x2dba53\x2d6bacf502a915-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jun 25 14:54:39.538315 kubelet[2843]: I0625 14:54:39.538292 2843 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fecff5b-8d76-44f3-ba53-6bacf502a915-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "9fecff5b-8d76-44f3-ba53-6bacf502a915" (UID: "9fecff5b-8d76-44f3-ba53-6bacf502a915"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 14:54:39.541209 systemd[1]: var-lib-kubelet-pods-9fecff5b\x2d8d76\x2d44f3\x2dba53\x2d6bacf502a915-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jun 25 14:54:39.546749 kubelet[2843]: E0625 14:54:39.542894 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.546749 kubelet[2843]: W0625 14:54:39.542914 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.546749 kubelet[2843]: E0625 14:54:39.542936 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.546749 kubelet[2843]: E0625 14:54:39.543523 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.546749 kubelet[2843]: W0625 14:54:39.543534 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.546749 kubelet[2843]: E0625 14:54:39.543560 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.546749 kubelet[2843]: I0625 14:54:39.543877 2843 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fecff5b-8d76-44f3-ba53-6bacf502a915-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "9fecff5b-8d76-44f3-ba53-6bacf502a915" (UID: "9fecff5b-8d76-44f3-ba53-6bacf502a915"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 14:54:39.547565 kubelet[2843]: E0625 14:54:39.547544 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.547630 kubelet[2843]: W0625 14:54:39.547617 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.547697 kubelet[2843]: E0625 14:54:39.547687 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.548089 kubelet[2843]: E0625 14:54:39.548074 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.548184 kubelet[2843]: W0625 14:54:39.548172 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.548260 kubelet[2843]: E0625 14:54:39.548249 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.549662 kubelet[2843]: E0625 14:54:39.549644 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.549822 kubelet[2843]: W0625 14:54:39.549770 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.549946 kubelet[2843]: E0625 14:54:39.549936 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.550374 kubelet[2843]: E0625 14:54:39.550358 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.550481 kubelet[2843]: W0625 14:54:39.550467 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.550546 kubelet[2843]: E0625 14:54:39.550537 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.550859 kubelet[2843]: E0625 14:54:39.550846 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.550950 kubelet[2843]: W0625 14:54:39.550937 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.551015 kubelet[2843]: E0625 14:54:39.551005 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:39.553734 systemd[1]: var-lib-kubelet-pods-9fecff5b\x2d8d76\x2d44f3\x2dba53\x2d6bacf502a915-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx4jsr.mount: Deactivated successfully. Jun 25 14:54:39.555529 kubelet[2843]: I0625 14:54:39.555485 2843 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fecff5b-8d76-44f3-ba53-6bacf502a915-kube-api-access-x4jsr" (OuterVolumeSpecName: "kube-api-access-x4jsr") pod "9fecff5b-8d76-44f3-ba53-6bacf502a915" (UID: "9fecff5b-8d76-44f3-ba53-6bacf502a915"). InnerVolumeSpecName "kube-api-access-x4jsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 14:54:39.555766 kubelet[2843]: E0625 14:54:39.555755 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.555888 kubelet[2843]: W0625 14:54:39.555861 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.555976 kubelet[2843]: E0625 14:54:39.555966 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.616808 kubelet[2843]: I0625 14:54:39.614967 2843 scope.go:117] "RemoveContainer" containerID="97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b" Jun 25 14:54:39.618110 containerd[1492]: time="2024-06-25T14:54:39.618068727Z" level=info msg="RemoveContainer for \"97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b\"" Jun 25 14:54:39.621429 systemd[1]: Removed slice kubepods-besteffort-pod9fecff5b_8d76_44f3_ba53_6bacf502a915.slice - libcontainer container kubepods-besteffort-pod9fecff5b_8d76_44f3_ba53_6bacf502a915.slice. Jun 25 14:54:39.636369 containerd[1492]: time="2024-06-25T14:54:39.636298998Z" level=info msg="RemoveContainer for \"97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b\" returns successfully" Jun 25 14:54:39.636735 kubelet[2843]: E0625 14:54:39.636717 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.636892 kubelet[2843]: W0625 14:54:39.636874 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.636966 kubelet[2843]: E0625 14:54:39.636955 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:39.637215 kubelet[2843]: I0625 14:54:39.637186 2843 scope.go:117] "RemoveContainer" containerID="97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b" Jun 25 14:54:39.637869 kubelet[2843]: E0625 14:54:39.637852 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.637981 kubelet[2843]: W0625 14:54:39.637967 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.638066 containerd[1492]: time="2024-06-25T14:54:39.637981114Z" level=error msg="ContainerStatus for \"97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b\": not found" Jun 25 14:54:39.638114 kubelet[2843]: E0625 14:54:39.638048 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.638927 kubelet[2843]: E0625 14:54:39.638910 2843 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b\": not found" containerID="97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b" Jun 25 14:54:39.639131 kubelet[2843]: I0625 14:54:39.639104 2843 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b"} err="failed to get container status \"97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"97dc78096967a759fe6c10e192cfd383bfbe427639397d877cc4b0175e918e1b\": not found" Jun 25 14:54:39.639245 kubelet[2843]: E0625 14:54:39.639234 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.639322 kubelet[2843]: W0625 14:54:39.639311 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.639405 kubelet[2843]: E0625 14:54:39.639395 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:39.639642 kubelet[2843]: I0625 14:54:39.639617 2843 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x4jsr\" (UniqueName: \"kubernetes.io/projected/9fecff5b-8d76-44f3-ba53-6bacf502a915-kube-api-access-x4jsr\") on node \"ci-3815.2.4-a-f605b45a38\" DevicePath \"\"" Jun 25 14:54:39.639700 kubelet[2843]: I0625 14:54:39.639647 2843 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9fecff5b-8d76-44f3-ba53-6bacf502a915-typha-certs\") on node \"ci-3815.2.4-a-f605b45a38\" DevicePath \"\"" Jun 25 14:54:39.639700 kubelet[2843]: I0625 14:54:39.639658 2843 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fecff5b-8d76-44f3-ba53-6bacf502a915-tigera-ca-bundle\") on node \"ci-3815.2.4-a-f605b45a38\" DevicePath \"\"" Jun 25 14:54:39.639868 kubelet[2843]: E0625 14:54:39.639855 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.639945 kubelet[2843]: W0625 14:54:39.639933 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.640019 kubelet[2843]: E0625 14:54:39.640010 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.642350 kubelet[2843]: E0625 14:54:39.641976 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.642350 kubelet[2843]: W0625 14:54:39.642009 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.642350 kubelet[2843]: E0625 14:54:39.642120 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.642350 kubelet[2843]: E0625 14:54:39.642260 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.642350 kubelet[2843]: W0625 14:54:39.642269 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.642350 kubelet[2843]: E0625 14:54:39.642346 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:39.643339 kubelet[2843]: E0625 14:54:39.643303 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.643339 kubelet[2843]: W0625 14:54:39.643332 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.643486 kubelet[2843]: E0625 14:54:39.643462 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.643835 kubelet[2843]: E0625 14:54:39.643616 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.643835 kubelet[2843]: W0625 14:54:39.643629 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.643835 kubelet[2843]: E0625 14:54:39.643711 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.644132 kubelet[2843]: E0625 14:54:39.644098 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.644132 kubelet[2843]: W0625 14:54:39.644131 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.644261 kubelet[2843]: E0625 14:54:39.644235 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.645813 kubelet[2843]: E0625 14:54:39.644457 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.645813 kubelet[2843]: W0625 14:54:39.644479 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.645813 kubelet[2843]: E0625 14:54:39.644503 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.645813 kubelet[2843]: E0625 14:54:39.644687 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.645813 kubelet[2843]: W0625 14:54:39.644696 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.645813 kubelet[2843]: E0625 14:54:39.644710 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:39.645813 kubelet[2843]: E0625 14:54:39.644945 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.645813 kubelet[2843]: W0625 14:54:39.644953 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.645813 kubelet[2843]: E0625 14:54:39.645030 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.646066 kubelet[2843]: E0625 14:54:39.645836 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.646066 kubelet[2843]: W0625 14:54:39.645854 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.646066 kubelet[2843]: E0625 14:54:39.645942 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.649079 kubelet[2843]: E0625 14:54:39.649041 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.649079 kubelet[2843]: W0625 14:54:39.649070 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.649192 kubelet[2843]: E0625 14:54:39.649186 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.651201 kubelet[2843]: E0625 14:54:39.651172 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.651201 kubelet[2843]: W0625 14:54:39.651194 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.651354 kubelet[2843]: E0625 14:54:39.651222 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.653815 kubelet[2843]: E0625 14:54:39.653037 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.653815 kubelet[2843]: W0625 14:54:39.653073 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.653815 kubelet[2843]: E0625 14:54:39.653098 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:54:39.653815 kubelet[2843]: E0625 14:54:39.653533 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.653815 kubelet[2843]: W0625 14:54:39.653554 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.653815 kubelet[2843]: E0625 14:54:39.653568 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.668631 kubelet[2843]: E0625 14:54:39.668493 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:54:39.668631 kubelet[2843]: W0625 14:54:39.668516 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:54:39.668631 kubelet[2843]: E0625 14:54:39.668538 2843 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:54:39.706316 containerd[1492]: time="2024-06-25T14:54:39.706260857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bf468d948-8xtwb,Uid:76c657d0-4a62-4e88-910f-368765a4c1b0,Namespace:calico-system,Attempt:0,}" Jun 25 14:54:39.765409 containerd[1492]: time="2024-06-25T14:54:39.765361524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:39.768794 containerd[1492]: time="2024-06-25T14:54:39.768699715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jun 25 14:54:39.769054 containerd[1492]: time="2024-06-25T14:54:39.768945921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:54:39.769054 containerd[1492]: time="2024-06-25T14:54:39.769015722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:39.769142 containerd[1492]: time="2024-06-25T14:54:39.769036523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:54:39.769142 containerd[1492]: time="2024-06-25T14:54:39.769084884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:39.773757 containerd[1492]: time="2024-06-25T14:54:39.773699943Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:39.777608 containerd[1492]: time="2024-06-25T14:54:39.777575426Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:39.782493 containerd[1492]: time="2024-06-25T14:54:39.782457650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:39.783249 containerd[1492]: time="2024-06-25T14:54:39.783210947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 2.755161294s" Jun 25 14:54:39.783597 containerd[1492]: time="2024-06-25T14:54:39.783568954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jun 25 14:54:39.789475 containerd[1492]: time="2024-06-25T14:54:39.785445954Z" level=info msg="CreateContainer within sandbox \"4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 14:54:39.786344 systemd[1]: Started cri-containerd-819ec8a4e1a67d6c4c7509810c35dc2fac8c5e9509b9fccd4f66f53597aa3dd8.scope - libcontainer container 819ec8a4e1a67d6c4c7509810c35dc2fac8c5e9509b9fccd4f66f53597aa3dd8. 
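The run of kubelet errors above all comes from one FlexVolume probe: the kubelet keeps invoking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary is not present, and the empty stdout then fails JSON decoding. A minimal Go sketch of that failure mode (this is not the kubelet's driver-call code; DriverStatus, callDriver, and the JSON field names are invented for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus stands in for the JSON a FlexVolume driver is expected to
// print on stdout for the "init" call; the field names are illustrative.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func callDriver(path string, args ...string) (*DriverStatus, error) {
	// With the driver binary absent, the exec fails and out stays empty; the
	// kubelet logs this step as "FlexVolume: driver call failed: ... output: \"\"".
	out, execErr := exec.Command(path, args...).Output()
	if execErr != nil {
		fmt.Println("driver call failed:", execErr)
	}

	// Decoding the empty output is what yields the repeated
	// "unexpected end of JSON input" errors seen in the log above.
	var status DriverStatus
	if err := json.Unmarshal(out, &status); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: %s, output: %q, error: %v",
			args[0], out, err)
	}
	return &status, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}
```

Run against a node without that driver binary, the exec failure is reported with empty output and json.Unmarshal of that empty output returns "unexpected end of JSON input", matching the pair of messages repeated above.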
Jun 25 14:54:39.804000 audit: BPF prog-id=154 op=LOAD Jun 25 14:54:39.814819 kernel: audit: type=1334 audit(1719327279.804:498): prog-id=154 op=LOAD Jun 25 14:54:39.813000 audit: BPF prog-id=155 op=LOAD Jun 25 14:54:39.813000 audit[3538]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001338b0 a2=78 a3=0 items=0 ppid=3529 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:39.844744 kernel: audit: type=1334 audit(1719327279.813:499): prog-id=155 op=LOAD Jun 25 14:54:39.844895 kernel: audit: type=1300 audit(1719327279.813:499): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001338b0 a2=78 a3=0 items=0 ppid=3529 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:39.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831396563386134653161363764366334633735303938313063333564 Jun 25 14:54:39.864265 containerd[1492]: time="2024-06-25T14:54:39.864211963Z" level=info msg="CreateContainer within sandbox \"4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fd569a3d2fea1a982bd7cea35de128c3a68fd04d5a11727ccaf891a78c783601\"" Jun 25 14:54:39.869788 kernel: audit: type=1327 audit(1719327279.813:499): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831396563386134653161363764366334633735303938313063333564 Jun 25 14:54:39.870385 containerd[1492]: time="2024-06-25T14:54:39.870342614Z" level=info msg="StartContainer for \"fd569a3d2fea1a982bd7cea35de128c3a68fd04d5a11727ccaf891a78c783601\"" Jun 25 14:54:39.813000 audit: BPF prog-id=156 op=LOAD Jun 25 14:54:39.813000 audit[3538]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000133640 a2=78 a3=0 items=0 ppid=3529 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:39.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831396563386134653161363764366334633735303938313063333564 Jun 25 14:54:39.813000 audit: BPF prog-id=156 op=UNLOAD Jun 25 14:54:39.813000 audit: BPF prog-id=155 op=UNLOAD Jun 25 14:54:39.813000 audit: BPF prog-id=157 op=LOAD Jun 25 14:54:39.813000 audit[3538]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000133b10 a2=78 a3=0 items=0 ppid=3529 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:39.813000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831396563386134653161363764366334633735303938313063333564 Jun 25 14:54:39.894827 containerd[1492]: time="2024-06-25T14:54:39.894749417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bf468d948-8xtwb,Uid:76c657d0-4a62-4e88-910f-368765a4c1b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"819ec8a4e1a67d6c4c7509810c35dc2fac8c5e9509b9fccd4f66f53597aa3dd8\"" Jun 25 14:54:39.904284 containerd[1492]: time="2024-06-25T14:54:39.904238221Z" level=info msg="CreateContainer within sandbox \"819ec8a4e1a67d6c4c7509810c35dc2fac8c5e9509b9fccd4f66f53597aa3dd8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 14:54:39.917011 systemd[1]: Started cri-containerd-fd569a3d2fea1a982bd7cea35de128c3a68fd04d5a11727ccaf891a78c783601.scope - libcontainer container fd569a3d2fea1a982bd7cea35de128c3a68fd04d5a11727ccaf891a78c783601. Jun 25 14:54:39.931000 audit: BPF prog-id=158 op=LOAD Jun 25 14:54:39.931000 audit[3566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3317 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:39.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664353639613364326665613161393832626437636561333564653132 Jun 25 14:54:39.931000 audit: BPF prog-id=159 op=LOAD Jun 25 14:54:39.931000 audit[3566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3317 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:39.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664353639613364326665613161393832626437636561333564653132 Jun 25 14:54:39.931000 audit: BPF prog-id=159 op=UNLOAD Jun 25 14:54:39.931000 audit: BPF prog-id=158 op=UNLOAD Jun 25 14:54:39.931000 audit: BPF prog-id=160 op=LOAD Jun 25 14:54:39.931000 audit[3566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3317 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:39.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664353639613364326665613161393832626437636561333564653132 Jun 25 14:54:39.952119 containerd[1492]: time="2024-06-25T14:54:39.952063966Z" level=info msg="StartContainer for \"fd569a3d2fea1a982bd7cea35de128c3a68fd04d5a11727ccaf891a78c783601\" returns successfully" Jun 25 14:54:39.962632 containerd[1492]: time="2024-06-25T14:54:39.962565311Z" level=info 
msg="CreateContainer within sandbox \"819ec8a4e1a67d6c4c7509810c35dc2fac8c5e9509b9fccd4f66f53597aa3dd8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"baf1d6355d8fb401548bfd64a2382f65836061340a5994f2dc6735465390df8c\"" Jun 25 14:54:39.963360 containerd[1492]: time="2024-06-25T14:54:39.963329967Z" level=info msg="StartContainer for \"baf1d6355d8fb401548bfd64a2382f65836061340a5994f2dc6735465390df8c\"" Jun 25 14:54:39.967516 systemd[1]: cri-containerd-fd569a3d2fea1a982bd7cea35de128c3a68fd04d5a11727ccaf891a78c783601.scope: Deactivated successfully. Jun 25 14:54:39.969000 audit: BPF prog-id=160 op=UNLOAD Jun 25 14:54:39.990961 systemd[1]: Started cri-containerd-baf1d6355d8fb401548bfd64a2382f65836061340a5994f2dc6735465390df8c.scope - libcontainer container baf1d6355d8fb401548bfd64a2382f65836061340a5994f2dc6735465390df8c. Jun 25 14:54:40.021000 audit: BPF prog-id=161 op=LOAD Jun 25 14:54:40.021000 audit: BPF prog-id=162 op=LOAD Jun 25 14:54:40.021000 audit[3609]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3529 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:40.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663164363335356438666234303135343862666436346132333832 Jun 25 14:54:40.022000 audit: BPF prog-id=163 op=LOAD Jun 25 14:54:40.022000 audit[3609]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3529 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:40.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663164363335356438666234303135343862666436346132333832 Jun 25 14:54:40.022000 audit: BPF prog-id=163 op=UNLOAD Jun 25 14:54:40.022000 audit: BPF prog-id=162 op=UNLOAD Jun 25 14:54:40.022000 audit: BPF prog-id=164 op=LOAD Jun 25 14:54:40.022000 audit[3609]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3529 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:40.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663164363335356438666234303135343862666436346132333832 Jun 25 14:54:40.226011 containerd[1492]: time="2024-06-25T14:54:40.225897730Z" level=info msg="StartContainer for \"baf1d6355d8fb401548bfd64a2382f65836061340a5994f2dc6735465390df8c\" returns successfully" Jun 25 14:54:40.264362 containerd[1492]: time="2024-06-25T14:54:40.264290375Z" level=info msg="shim disconnected" id=fd569a3d2fea1a982bd7cea35de128c3a68fd04d5a11727ccaf891a78c783601 namespace=k8s.io Jun 25 14:54:40.264362 containerd[1492]: 
time="2024-06-25T14:54:40.264354177Z" level=warning msg="cleaning up after shim disconnected" id=fd569a3d2fea1a982bd7cea35de128c3a68fd04d5a11727ccaf891a78c783601 namespace=k8s.io Jun 25 14:54:40.264362 containerd[1492]: time="2024-06-25T14:54:40.264363177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:54:40.534991 kubelet[2843]: E0625 14:54:40.534955 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gqsk" podUID="abc80f9d-37c5-4a3d-984d-c970bd8ec106" Jun 25 14:54:40.618992 containerd[1492]: time="2024-06-25T14:54:40.618939291Z" level=info msg="StopPodSandbox for \"4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd\"" Jun 25 14:54:40.619371 containerd[1492]: time="2024-06-25T14:54:40.619000252Z" level=info msg="Container to stop \"fd569a3d2fea1a982bd7cea35de128c3a68fd04d5a11727ccaf891a78c783601\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 14:54:40.623257 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd-shm.mount: Deactivated successfully. Jun 25 14:54:40.628000 audit: BPF prog-id=146 op=UNLOAD Jun 25 14:54:40.630480 systemd[1]: cri-containerd-4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd.scope: Deactivated successfully. Jun 25 14:54:40.632000 audit: BPF prog-id=149 op=UNLOAD Jun 25 14:54:40.662099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd-rootfs.mount: Deactivated successfully. Jun 25 14:54:40.666374 kubelet[2843]: I0625 14:54:40.666319 2843 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6bf468d948-8xtwb" podStartSLOduration=4.666281604 podCreationTimestamp="2024-06-25 14:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:54:40.666180882 +0000 UTC m=+27.234338227" watchObservedRunningTime="2024-06-25 14:54:40.666281604 +0000 UTC m=+27.234438989" Jun 25 14:54:40.675703 containerd[1492]: time="2024-06-25T14:54:40.675644760Z" level=info msg="shim disconnected" id=4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd namespace=k8s.io Jun 25 14:54:40.675957 containerd[1492]: time="2024-06-25T14:54:40.675936686Z" level=warning msg="cleaning up after shim disconnected" id=4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd namespace=k8s.io Jun 25 14:54:40.676027 containerd[1492]: time="2024-06-25T14:54:40.676013288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:54:40.689358 containerd[1492]: time="2024-06-25T14:54:40.689309607Z" level=info msg="TearDown network for sandbox \"4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd\" successfully" Jun 25 14:54:40.689568 containerd[1492]: time="2024-06-25T14:54:40.689547092Z" level=info msg="StopPodSandbox for \"4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd\" returns successfully" Jun 25 14:54:40.754838 kubelet[2843]: I0625 14:54:40.754801 2843 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-xtables-lock\") pod 
\"bdcc6758-d360-4da6-92f3-42cba856bd81\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " Jun 25 14:54:40.754838 kubelet[2843]: I0625 14:54:40.754850 2843 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdcc6758-d360-4da6-92f3-42cba856bd81-tigera-ca-bundle\") pod \"bdcc6758-d360-4da6-92f3-42cba856bd81\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " Jun 25 14:54:40.755019 kubelet[2843]: I0625 14:54:40.754874 2843 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bdcc6758-d360-4da6-92f3-42cba856bd81-node-certs\") pod \"bdcc6758-d360-4da6-92f3-42cba856bd81\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " Jun 25 14:54:40.755019 kubelet[2843]: I0625 14:54:40.754893 2843 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-cni-bin-dir\") pod \"bdcc6758-d360-4da6-92f3-42cba856bd81\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " Jun 25 14:54:40.755019 kubelet[2843]: I0625 14:54:40.754911 2843 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-cni-net-dir\") pod \"bdcc6758-d360-4da6-92f3-42cba856bd81\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " Jun 25 14:54:40.755019 kubelet[2843]: I0625 14:54:40.754935 2843 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw2jj\" (UniqueName: \"kubernetes.io/projected/bdcc6758-d360-4da6-92f3-42cba856bd81-kube-api-access-nw2jj\") pod \"bdcc6758-d360-4da6-92f3-42cba856bd81\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " Jun 25 14:54:40.755019 kubelet[2843]: I0625 14:54:40.754953 2843 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-flexvol-driver-host\") pod \"bdcc6758-d360-4da6-92f3-42cba856bd81\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " Jun 25 14:54:40.755019 kubelet[2843]: I0625 14:54:40.754972 2843 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-var-run-calico\") pod \"bdcc6758-d360-4da6-92f3-42cba856bd81\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " Jun 25 14:54:40.755167 kubelet[2843]: I0625 14:54:40.755003 2843 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-var-lib-calico\") pod \"bdcc6758-d360-4da6-92f3-42cba856bd81\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " Jun 25 14:54:40.755167 kubelet[2843]: I0625 14:54:40.755033 2843 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-lib-modules\") pod \"bdcc6758-d360-4da6-92f3-42cba856bd81\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " Jun 25 14:54:40.755167 kubelet[2843]: I0625 14:54:40.755053 2843 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-policysync\") pod \"bdcc6758-d360-4da6-92f3-42cba856bd81\" (UID: 
\"bdcc6758-d360-4da6-92f3-42cba856bd81\") " Jun 25 14:54:40.755167 kubelet[2843]: I0625 14:54:40.755072 2843 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-cni-log-dir\") pod \"bdcc6758-d360-4da6-92f3-42cba856bd81\" (UID: \"bdcc6758-d360-4da6-92f3-42cba856bd81\") " Jun 25 14:54:40.755338 kubelet[2843]: I0625 14:54:40.755310 2843 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "bdcc6758-d360-4da6-92f3-42cba856bd81" (UID: "bdcc6758-d360-4da6-92f3-42cba856bd81"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 14:54:40.755391 kubelet[2843]: I0625 14:54:40.755308 2843 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bdcc6758-d360-4da6-92f3-42cba856bd81" (UID: "bdcc6758-d360-4da6-92f3-42cba856bd81"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 14:54:40.755391 kubelet[2843]: I0625 14:54:40.755368 2843 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "bdcc6758-d360-4da6-92f3-42cba856bd81" (UID: "bdcc6758-d360-4da6-92f3-42cba856bd81"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 14:54:40.755391 kubelet[2843]: I0625 14:54:40.755386 2843 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "bdcc6758-d360-4da6-92f3-42cba856bd81" (UID: "bdcc6758-d360-4da6-92f3-42cba856bd81"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 14:54:40.755471 kubelet[2843]: I0625 14:54:40.755401 2843 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "bdcc6758-d360-4da6-92f3-42cba856bd81" (UID: "bdcc6758-d360-4da6-92f3-42cba856bd81"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 14:54:40.755471 kubelet[2843]: I0625 14:54:40.755415 2843 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bdcc6758-d360-4da6-92f3-42cba856bd81" (UID: "bdcc6758-d360-4da6-92f3-42cba856bd81"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 14:54:40.755471 kubelet[2843]: I0625 14:54:40.755428 2843 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-policysync" (OuterVolumeSpecName: "policysync") pod "bdcc6758-d360-4da6-92f3-42cba856bd81" (UID: "bdcc6758-d360-4da6-92f3-42cba856bd81"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 14:54:40.756397 kubelet[2843]: I0625 14:54:40.756369 2843 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdcc6758-d360-4da6-92f3-42cba856bd81-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "bdcc6758-d360-4da6-92f3-42cba856bd81" (UID: "bdcc6758-d360-4da6-92f3-42cba856bd81"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 14:54:40.756582 kubelet[2843]: I0625 14:54:40.756513 2843 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "bdcc6758-d360-4da6-92f3-42cba856bd81" (UID: "bdcc6758-d360-4da6-92f3-42cba856bd81"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 14:54:40.756668 kubelet[2843]: I0625 14:54:40.756527 2843 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "bdcc6758-d360-4da6-92f3-42cba856bd81" (UID: "bdcc6758-d360-4da6-92f3-42cba856bd81"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 14:54:40.759182 systemd[1]: var-lib-kubelet-pods-bdcc6758\x2dd360\x2d4da6\x2d92f3\x2d42cba856bd81-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jun 25 14:54:40.760447 kubelet[2843]: I0625 14:54:40.760410 2843 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdcc6758-d360-4da6-92f3-42cba856bd81-node-certs" (OuterVolumeSpecName: "node-certs") pod "bdcc6758-d360-4da6-92f3-42cba856bd81" (UID: "bdcc6758-d360-4da6-92f3-42cba856bd81"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 14:54:40.762914 systemd[1]: var-lib-kubelet-pods-bdcc6758\x2dd360\x2d4da6\x2d92f3\x2d42cba856bd81-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnw2jj.mount: Deactivated successfully. Jun 25 14:54:40.764032 kubelet[2843]: I0625 14:54:40.763997 2843 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdcc6758-d360-4da6-92f3-42cba856bd81-kube-api-access-nw2jj" (OuterVolumeSpecName: "kube-api-access-nw2jj") pod "bdcc6758-d360-4da6-92f3-42cba856bd81" (UID: "bdcc6758-d360-4da6-92f3-42cba856bd81"). InnerVolumeSpecName "kube-api-access-nw2jj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 14:54:40.855516 kubelet[2843]: I0625 14:54:40.855411 2843 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bdcc6758-d360-4da6-92f3-42cba856bd81-node-certs\") on node \"ci-3815.2.4-a-f605b45a38\" DevicePath \"\"" Jun 25 14:54:40.855516 kubelet[2843]: I0625 14:54:40.855443 2843 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-cni-bin-dir\") on node \"ci-3815.2.4-a-f605b45a38\" DevicePath \"\"" Jun 25 14:54:40.855516 kubelet[2843]: I0625 14:54:40.855454 2843 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-cni-net-dir\") on node \"ci-3815.2.4-a-f605b45a38\" DevicePath \"\"" Jun 25 14:54:40.855516 kubelet[2843]: I0625 14:54:40.855476 2843 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nw2jj\" (UniqueName: \"kubernetes.io/projected/bdcc6758-d360-4da6-92f3-42cba856bd81-kube-api-access-nw2jj\") on node \"ci-3815.2.4-a-f605b45a38\" DevicePath \"\"" Jun 25 14:54:40.855516 kubelet[2843]: I0625 14:54:40.855487 2843 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-flexvol-driver-host\") on node \"ci-3815.2.4-a-f605b45a38\" DevicePath \"\"" Jun 25 14:54:40.855516 kubelet[2843]: I0625 14:54:40.855497 2843 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-var-run-calico\") on node \"ci-3815.2.4-a-f605b45a38\" DevicePath \"\"" Jun 25 14:54:40.855516 kubelet[2843]: I0625 14:54:40.855507 2843 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-var-lib-calico\") on node \"ci-3815.2.4-a-f605b45a38\" DevicePath \"\"" Jun 25 14:54:40.855516 kubelet[2843]: I0625 14:54:40.855517 2843 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-lib-modules\") on node \"ci-3815.2.4-a-f605b45a38\" DevicePath \"\"" Jun 25 14:54:40.855860 kubelet[2843]: I0625 14:54:40.855526 2843 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-policysync\") on node \"ci-3815.2.4-a-f605b45a38\" DevicePath \"\"" Jun 25 14:54:40.855860 kubelet[2843]: I0625 14:54:40.855543 2843 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-cni-log-dir\") on node \"ci-3815.2.4-a-f605b45a38\" DevicePath \"\"" Jun 25 14:54:40.855860 kubelet[2843]: I0625 14:54:40.855553 2843 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdcc6758-d360-4da6-92f3-42cba856bd81-xtables-lock\") on node \"ci-3815.2.4-a-f605b45a38\" DevicePath \"\"" Jun 25 14:54:40.855860 kubelet[2843]: I0625 14:54:40.855563 2843 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdcc6758-d360-4da6-92f3-42cba856bd81-tigera-ca-bundle\") on node \"ci-3815.2.4-a-f605b45a38\" DevicePath \"\"" Jun 25 14:54:41.536812 kubelet[2843]: I0625 14:54:41.536747 2843 
kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9fecff5b-8d76-44f3-ba53-6bacf502a915" path="/var/lib/kubelet/pods/9fecff5b-8d76-44f3-ba53-6bacf502a915/volumes" Jun 25 14:54:41.541323 systemd[1]: Removed slice kubepods-besteffort-podbdcc6758_d360_4da6_92f3_42cba856bd81.slice - libcontainer container kubepods-besteffort-podbdcc6758_d360_4da6_92f3_42cba856bd81.slice. Jun 25 14:54:41.633133 kubelet[2843]: I0625 14:54:41.633099 2843 scope.go:117] "RemoveContainer" containerID="fd569a3d2fea1a982bd7cea35de128c3a68fd04d5a11727ccaf891a78c783601" Jun 25 14:54:41.636201 containerd[1492]: time="2024-06-25T14:54:41.635798323Z" level=info msg="RemoveContainer for \"fd569a3d2fea1a982bd7cea35de128c3a68fd04d5a11727ccaf891a78c783601\"" Jun 25 14:54:41.646533 containerd[1492]: time="2024-06-25T14:54:41.646447742Z" level=info msg="RemoveContainer for \"fd569a3d2fea1a982bd7cea35de128c3a68fd04d5a11727ccaf891a78c783601\" returns successfully" Jun 25 14:54:41.674683 kubelet[2843]: I0625 14:54:41.674633 2843 topology_manager.go:215] "Topology Admit Handler" podUID="64d51fb9-4e91-4eae-a9ac-a66e93677769" podNamespace="calico-system" podName="calico-node-9lzcr" Jun 25 14:54:41.674854 kubelet[2843]: E0625 14:54:41.674702 2843 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bdcc6758-d360-4da6-92f3-42cba856bd81" containerName="flexvol-driver" Jun 25 14:54:41.674854 kubelet[2843]: I0625 14:54:41.674727 2843 memory_manager.go:346] "RemoveStaleState removing state" podUID="bdcc6758-d360-4da6-92f3-42cba856bd81" containerName="flexvol-driver" Jun 25 14:54:41.679942 systemd[1]: Created slice kubepods-besteffort-pod64d51fb9_4e91_4eae_a9ac_a66e93677769.slice - libcontainer container kubepods-besteffort-pod64d51fb9_4e91_4eae_a9ac_a66e93677769.slice. 
Jun 25 14:54:41.760905 kubelet[2843]: I0625 14:54:41.760871 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/64d51fb9-4e91-4eae-a9ac-a66e93677769-var-lib-calico\") pod \"calico-node-9lzcr\" (UID: \"64d51fb9-4e91-4eae-a9ac-a66e93677769\") " pod="calico-system/calico-node-9lzcr" Jun 25 14:54:41.760905 kubelet[2843]: I0625 14:54:41.760913 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/64d51fb9-4e91-4eae-a9ac-a66e93677769-flexvol-driver-host\") pod \"calico-node-9lzcr\" (UID: \"64d51fb9-4e91-4eae-a9ac-a66e93677769\") " pod="calico-system/calico-node-9lzcr" Jun 25 14:54:41.761082 kubelet[2843]: I0625 14:54:41.760934 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64d51fb9-4e91-4eae-a9ac-a66e93677769-tigera-ca-bundle\") pod \"calico-node-9lzcr\" (UID: \"64d51fb9-4e91-4eae-a9ac-a66e93677769\") " pod="calico-system/calico-node-9lzcr" Jun 25 14:54:41.761082 kubelet[2843]: I0625 14:54:41.760953 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/64d51fb9-4e91-4eae-a9ac-a66e93677769-cni-bin-dir\") pod \"calico-node-9lzcr\" (UID: \"64d51fb9-4e91-4eae-a9ac-a66e93677769\") " pod="calico-system/calico-node-9lzcr" Jun 25 14:54:41.761082 kubelet[2843]: I0625 14:54:41.760974 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64d51fb9-4e91-4eae-a9ac-a66e93677769-lib-modules\") pod \"calico-node-9lzcr\" (UID: \"64d51fb9-4e91-4eae-a9ac-a66e93677769\") " pod="calico-system/calico-node-9lzcr" Jun 25 14:54:41.761082 kubelet[2843]: I0625 14:54:41.760992 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64d51fb9-4e91-4eae-a9ac-a66e93677769-xtables-lock\") pod \"calico-node-9lzcr\" (UID: \"64d51fb9-4e91-4eae-a9ac-a66e93677769\") " pod="calico-system/calico-node-9lzcr" Jun 25 14:54:41.761082 kubelet[2843]: I0625 14:54:41.761021 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/64d51fb9-4e91-4eae-a9ac-a66e93677769-cni-net-dir\") pod \"calico-node-9lzcr\" (UID: \"64d51fb9-4e91-4eae-a9ac-a66e93677769\") " pod="calico-system/calico-node-9lzcr" Jun 25 14:54:41.761276 kubelet[2843]: I0625 14:54:41.761039 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/64d51fb9-4e91-4eae-a9ac-a66e93677769-cni-log-dir\") pod \"calico-node-9lzcr\" (UID: \"64d51fb9-4e91-4eae-a9ac-a66e93677769\") " pod="calico-system/calico-node-9lzcr" Jun 25 14:54:41.761276 kubelet[2843]: I0625 14:54:41.761058 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/64d51fb9-4e91-4eae-a9ac-a66e93677769-policysync\") pod \"calico-node-9lzcr\" (UID: \"64d51fb9-4e91-4eae-a9ac-a66e93677769\") " pod="calico-system/calico-node-9lzcr" Jun 25 14:54:41.761276 kubelet[2843]: I0625 14:54:41.761077 2843 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/64d51fb9-4e91-4eae-a9ac-a66e93677769-var-run-calico\") pod \"calico-node-9lzcr\" (UID: \"64d51fb9-4e91-4eae-a9ac-a66e93677769\") " pod="calico-system/calico-node-9lzcr" Jun 25 14:54:41.761276 kubelet[2843]: I0625 14:54:41.761095 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/64d51fb9-4e91-4eae-a9ac-a66e93677769-node-certs\") pod \"calico-node-9lzcr\" (UID: \"64d51fb9-4e91-4eae-a9ac-a66e93677769\") " pod="calico-system/calico-node-9lzcr" Jun 25 14:54:41.761276 kubelet[2843]: I0625 14:54:41.761116 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5w54\" (UniqueName: \"kubernetes.io/projected/64d51fb9-4e91-4eae-a9ac-a66e93677769-kube-api-access-c5w54\") pod \"calico-node-9lzcr\" (UID: \"64d51fb9-4e91-4eae-a9ac-a66e93677769\") " pod="calico-system/calico-node-9lzcr" Jun 25 14:54:41.985487 containerd[1492]: time="2024-06-25T14:54:41.985320333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9lzcr,Uid:64d51fb9-4e91-4eae-a9ac-a66e93677769,Namespace:calico-system,Attempt:0,}" Jun 25 14:54:42.026037 containerd[1492]: time="2024-06-25T14:54:42.025913875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:54:42.026242 containerd[1492]: time="2024-06-25T14:54:42.026215561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:42.026354 containerd[1492]: time="2024-06-25T14:54:42.026322243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:54:42.026469 containerd[1492]: time="2024-06-25T14:54:42.026438445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:54:42.043025 systemd[1]: Started cri-containerd-00f69bc50c72f4f9cb95af41630dd408cb5b3521a86bec7e35e6b69b1861480f.scope - libcontainer container 00f69bc50c72f4f9cb95af41630dd408cb5b3521a86bec7e35e6b69b1861480f. 
Jun 25 14:54:42.051000 audit: BPF prog-id=165 op=LOAD Jun 25 14:54:42.052000 audit: BPF prog-id=166 op=LOAD Jun 25 14:54:42.052000 audit[3721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=3712 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:42.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030663639626335306337326634663963623935616634313633306464 Jun 25 14:54:42.052000 audit: BPF prog-id=167 op=LOAD Jun 25 14:54:42.052000 audit[3721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=3712 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:42.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030663639626335306337326634663963623935616634313633306464 Jun 25 14:54:42.052000 audit: BPF prog-id=167 op=UNLOAD Jun 25 14:54:42.052000 audit: BPF prog-id=166 op=UNLOAD Jun 25 14:54:42.052000 audit: BPF prog-id=168 op=LOAD Jun 25 14:54:42.052000 audit[3721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=3712 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:42.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030663639626335306337326634663963623935616634313633306464 Jun 25 14:54:42.068584 containerd[1492]: time="2024-06-25T14:54:42.068423288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9lzcr,Uid:64d51fb9-4e91-4eae-a9ac-a66e93677769,Namespace:calico-system,Attempt:0,} returns sandbox id \"00f69bc50c72f4f9cb95af41630dd408cb5b3521a86bec7e35e6b69b1861480f\"" Jun 25 14:54:42.072901 containerd[1492]: time="2024-06-25T14:54:42.072668733Z" level=info msg="CreateContainer within sandbox \"00f69bc50c72f4f9cb95af41630dd408cb5b3521a86bec7e35e6b69b1861480f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 14:54:42.110990 containerd[1492]: time="2024-06-25T14:54:42.110930541Z" level=info msg="CreateContainer within sandbox \"00f69bc50c72f4f9cb95af41630dd408cb5b3521a86bec7e35e6b69b1861480f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a4f9e29a8451a978c740295493dd81757e528285dccade7b1d923f8a5346b4e6\"" Jun 25 14:54:42.113208 containerd[1492]: time="2024-06-25T14:54:42.112575735Z" level=info msg="StartContainer for \"a4f9e29a8451a978c740295493dd81757e528285dccade7b1d923f8a5346b4e6\"" Jun 25 14:54:42.134963 systemd[1]: Started cri-containerd-a4f9e29a8451a978c740295493dd81757e528285dccade7b1d923f8a5346b4e6.scope - libcontainer container a4f9e29a8451a978c740295493dd81757e528285dccade7b1d923f8a5346b4e6. 
Jun 25 14:54:42.146000 audit: BPF prog-id=169 op=LOAD Jun 25 14:54:42.146000 audit[3753]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3712 pid=3753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:42.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134663965323961383435316139373863373430323935343933646438 Jun 25 14:54:42.146000 audit: BPF prog-id=170 op=LOAD Jun 25 14:54:42.146000 audit[3753]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3712 pid=3753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:42.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134663965323961383435316139373863373430323935343933646438 Jun 25 14:54:42.146000 audit: BPF prog-id=170 op=UNLOAD Jun 25 14:54:42.146000 audit: BPF prog-id=169 op=UNLOAD Jun 25 14:54:42.146000 audit: BPF prog-id=171 op=LOAD Jun 25 14:54:42.146000 audit[3753]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3712 pid=3753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:42.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134663965323961383435316139373863373430323935343933646438 Jun 25 14:54:42.165527 containerd[1492]: time="2024-06-25T14:54:42.165465836Z" level=info msg="StartContainer for \"a4f9e29a8451a978c740295493dd81757e528285dccade7b1d923f8a5346b4e6\" returns successfully" Jun 25 14:54:42.173597 systemd[1]: cri-containerd-a4f9e29a8451a978c740295493dd81757e528285dccade7b1d923f8a5346b4e6.scope: Deactivated successfully. 
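The audit PROCTITLE fields above carry the audited process's command line hex-encoded, with argv entries joined by NUL bytes. Decoding the prefix of one of them recovers the runc invocation (runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container id>). A small decoder using only the Go standard library:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Prefix of one PROCTITLE value from the audit records above (truncated).
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// The kernel joins argv with NUL bytes, so split on "\x00" to recover it.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " ")) // runc --root /run/containerd/runc/k8s.io
}
```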
Jun 25 14:54:42.176000 audit: BPF prog-id=171 op=UNLOAD Jun 25 14:54:42.276665 containerd[1492]: time="2024-06-25T14:54:42.276614907Z" level=info msg="shim disconnected" id=a4f9e29a8451a978c740295493dd81757e528285dccade7b1d923f8a5346b4e6 namespace=k8s.io Jun 25 14:54:42.277006 containerd[1492]: time="2024-06-25T14:54:42.276982195Z" level=warning msg="cleaning up after shim disconnected" id=a4f9e29a8451a978c740295493dd81757e528285dccade7b1d923f8a5346b4e6 namespace=k8s.io Jun 25 14:54:42.277104 containerd[1492]: time="2024-06-25T14:54:42.277088917Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:54:42.534609 kubelet[2843]: E0625 14:54:42.534488 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gqsk" podUID="abc80f9d-37c5-4a3d-984d-c970bd8ec106" Jun 25 14:54:42.637682 containerd[1492]: time="2024-06-25T14:54:42.637644315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 14:54:43.537522 kubelet[2843]: I0625 14:54:43.537215 2843 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bdcc6758-d360-4da6-92f3-42cba856bd81" path="/var/lib/kubelet/pods/bdcc6758-d360-4da6-92f3-42cba856bd81/volumes" Jun 25 14:54:44.534272 kubelet[2843]: E0625 14:54:44.534240 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gqsk" podUID="abc80f9d-37c5-4a3d-984d-c970bd8ec106" Jun 25 14:54:45.886852 containerd[1492]: time="2024-06-25T14:54:45.886773858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:45.888747 containerd[1492]: time="2024-06-25T14:54:45.888698735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jun 25 14:54:45.893320 containerd[1492]: time="2024-06-25T14:54:45.893292821Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:45.896269 containerd[1492]: time="2024-06-25T14:54:45.896243717Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:45.899968 containerd[1492]: time="2024-06-25T14:54:45.899927906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:45.900711 containerd[1492]: time="2024-06-25T14:54:45.900680880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 3.262019825s" Jun 25 14:54:45.900843 containerd[1492]: time="2024-06-25T14:54:45.900822363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference 
\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jun 25 14:54:45.902934 containerd[1492]: time="2024-06-25T14:54:45.902860841Z" level=info msg="CreateContainer within sandbox \"00f69bc50c72f4f9cb95af41630dd408cb5b3521a86bec7e35e6b69b1861480f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 14:54:45.936452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1102725549.mount: Deactivated successfully. Jun 25 14:54:45.951321 containerd[1492]: time="2024-06-25T14:54:45.951243233Z" level=info msg="CreateContainer within sandbox \"00f69bc50c72f4f9cb95af41630dd408cb5b3521a86bec7e35e6b69b1861480f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"76c736356d8a848d10103ffd4fe2290cc05d2bcd5f9b22957808875238cf126a\"" Jun 25 14:54:45.952217 containerd[1492]: time="2024-06-25T14:54:45.952192331Z" level=info msg="StartContainer for \"76c736356d8a848d10103ffd4fe2290cc05d2bcd5f9b22957808875238cf126a\"" Jun 25 14:54:45.979969 systemd[1]: Started cri-containerd-76c736356d8a848d10103ffd4fe2290cc05d2bcd5f9b22957808875238cf126a.scope - libcontainer container 76c736356d8a848d10103ffd4fe2290cc05d2bcd5f9b22957808875238cf126a. Jun 25 14:54:45.995836 kernel: kauditd_printk_skb: 58 callbacks suppressed Jun 25 14:54:45.995976 kernel: audit: type=1334 audit(1719327285.990:530): prog-id=172 op=LOAD Jun 25 14:54:45.990000 audit: BPF prog-id=172 op=LOAD Jun 25 14:54:45.990000 audit[3827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3712 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:46.021955 kernel: audit: type=1300 audit(1719327285.990:530): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3712 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:45.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736633733363335366438613834386431303130336666643466653232 Jun 25 14:54:46.043113 kernel: audit: type=1327 audit(1719327285.990:530): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736633733363335366438613834386431303130336666643466653232 Jun 25 14:54:45.990000 audit: BPF prog-id=173 op=LOAD Jun 25 14:54:46.049549 kernel: audit: type=1334 audit(1719327285.990:531): prog-id=173 op=LOAD Jun 25 14:54:45.990000 audit[3827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3712 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:46.071599 kernel: audit: type=1300 audit(1719327285.990:531): arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3712 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:45.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736633733363335366438613834386431303130336666643466653232 Jun 25 14:54:46.079983 containerd[1492]: time="2024-06-25T14:54:46.079943546Z" level=info msg="StartContainer for \"76c736356d8a848d10103ffd4fe2290cc05d2bcd5f9b22957808875238cf126a\" returns successfully" Jun 25 14:54:46.094305 kernel: audit: type=1327 audit(1719327285.990:531): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736633733363335366438613834386431303130336666643466653232 Jun 25 14:54:45.994000 audit: BPF prog-id=173 op=UNLOAD Jun 25 14:54:46.101278 kernel: audit: type=1334 audit(1719327285.994:532): prog-id=173 op=UNLOAD Jun 25 14:54:45.994000 audit: BPF prog-id=172 op=UNLOAD Jun 25 14:54:46.107201 kernel: audit: type=1334 audit(1719327285.994:533): prog-id=172 op=UNLOAD Jun 25 14:54:45.994000 audit: BPF prog-id=174 op=LOAD Jun 25 14:54:46.112345 kernel: audit: type=1334 audit(1719327285.994:534): prog-id=174 op=LOAD Jun 25 14:54:45.994000 audit[3827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3712 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:46.133713 kernel: audit: type=1300 audit(1719327285.994:534): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3712 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:45.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736633733363335366438613834386431303130336666643466653232 Jun 25 14:54:46.534848 kubelet[2843]: E0625 14:54:46.534807 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gqsk" podUID="abc80f9d-37c5-4a3d-984d-c970bd8ec106" Jun 25 14:54:46.933244 systemd[1]: run-containerd-runc-k8s.io-76c736356d8a848d10103ffd4fe2290cc05d2bcd5f9b22957808875238cf126a-runc.uK1nhD.mount: Deactivated successfully. 
Jun 25 14:54:48.042079 kubelet[2843]: I0625 14:54:48.042043 2843 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:54:48.068000 audit[3855]: NETFILTER_CFG table=filter:104 family=2 entries=15 op=nft_register_rule pid=3855 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:48.068000 audit[3855]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffda55da30 a2=0 a3=1 items=0 ppid=3025 pid=3855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:48.068000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:48.069000 audit[3855]: NETFILTER_CFG table=nat:105 family=2 entries=19 op=nft_register_chain pid=3855 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:48.069000 audit[3855]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffda55da30 a2=0 a3=1 items=0 ppid=3025 pid=3855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:48.069000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:48.777608 kubelet[2843]: E0625 14:54:48.533961 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gqsk" podUID="abc80f9d-37c5-4a3d-984d-c970bd8ec106" Jun 25 14:54:49.058402 containerd[1492]: time="2024-06-25T14:54:49.058282602Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 14:54:49.060891 systemd[1]: cri-containerd-76c736356d8a848d10103ffd4fe2290cc05d2bcd5f9b22957808875238cf126a.scope: Deactivated successfully. Jun 25 14:54:49.064000 audit: BPF prog-id=174 op=UNLOAD Jun 25 14:54:49.080160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76c736356d8a848d10103ffd4fe2290cc05d2bcd5f9b22957808875238cf126a-rootfs.mount: Deactivated successfully. 
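The "failed to reload cni configuration after receiving fs change event" message below comes from containerd watching /etc/cni/net.d and re-reading the CNI config whenever a file in it changes. The sketch below is a minimal stand-in for that kind of watch, built on the github.com/fsnotify/fsnotify package; it is illustrative only and assumes nothing about containerd's real implementation beyond the directory being watched.

```go
// Minimal sketch of a CNI config-directory watch of the kind that produced
// the reload message in the log; not containerd's actual code.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Directory where Calico drops calico-kubeconfig and the CNI conflist.
	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev, ok := <-watcher.Events:
			if !ok {
				return
			}
			if ev.Op&fsnotify.Write != 0 {
				// A reload is triggered here; until a network config file
				// exists, it fails with "no network config found in
				// /etc/cni/net.d", as seen in the log above.
				log.Printf("fs change event(WRITE %q), reloading CNI config", ev.Name)
			}
		case werr, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", werr)
		}
	}
}
```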
Jun 25 14:54:49.149179 kubelet[2843]: I0625 14:54:49.149148 2843 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 14:54:49.422205 kubelet[2843]: I0625 14:54:49.168007 2843 topology_manager.go:215] "Topology Admit Handler" podUID="9e6a8668-a98b-4401-b848-2bc30cd2cac6" podNamespace="kube-system" podName="coredns-5dd5756b68-qpwfn" Jun 25 14:54:49.422205 kubelet[2843]: I0625 14:54:49.175460 2843 topology_manager.go:215] "Topology Admit Handler" podUID="b14e02c4-5bcb-42cf-ac77-040a296222aa" podNamespace="calico-system" podName="calico-kube-controllers-564f6c74f7-tqbql" Jun 25 14:54:49.422205 kubelet[2843]: I0625 14:54:49.178857 2843 topology_manager.go:215] "Topology Admit Handler" podUID="0989afc6-f75f-4830-b87f-2ccfc1afc269" podNamespace="kube-system" podName="coredns-5dd5756b68-rjfmh" Jun 25 14:54:49.422205 kubelet[2843]: I0625 14:54:49.307856 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e6a8668-a98b-4401-b848-2bc30cd2cac6-config-volume\") pod \"coredns-5dd5756b68-qpwfn\" (UID: \"9e6a8668-a98b-4401-b848-2bc30cd2cac6\") " pod="kube-system/coredns-5dd5756b68-qpwfn" Jun 25 14:54:49.422205 kubelet[2843]: I0625 14:54:49.307964 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkcnq\" (UniqueName: \"kubernetes.io/projected/9e6a8668-a98b-4401-b848-2bc30cd2cac6-kube-api-access-kkcnq\") pod \"coredns-5dd5756b68-qpwfn\" (UID: \"9e6a8668-a98b-4401-b848-2bc30cd2cac6\") " pod="kube-system/coredns-5dd5756b68-qpwfn" Jun 25 14:54:49.422205 kubelet[2843]: I0625 14:54:49.307988 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0989afc6-f75f-4830-b87f-2ccfc1afc269-config-volume\") pod \"coredns-5dd5756b68-rjfmh\" (UID: \"0989afc6-f75f-4830-b87f-2ccfc1afc269\") " pod="kube-system/coredns-5dd5756b68-rjfmh" Jun 25 14:54:49.173030 systemd[1]: Created slice kubepods-burstable-pod9e6a8668_a98b_4401_b848_2bc30cd2cac6.slice - libcontainer container kubepods-burstable-pod9e6a8668_a98b_4401_b848_2bc30cd2cac6.slice. 
Jun 25 14:54:49.422552 kubelet[2843]: I0625 14:54:49.308046 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b14e02c4-5bcb-42cf-ac77-040a296222aa-tigera-ca-bundle\") pod \"calico-kube-controllers-564f6c74f7-tqbql\" (UID: \"b14e02c4-5bcb-42cf-ac77-040a296222aa\") " pod="calico-system/calico-kube-controllers-564f6c74f7-tqbql" Jun 25 14:54:49.422552 kubelet[2843]: I0625 14:54:49.308077 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fqvw\" (UniqueName: \"kubernetes.io/projected/b14e02c4-5bcb-42cf-ac77-040a296222aa-kube-api-access-6fqvw\") pod \"calico-kube-controllers-564f6c74f7-tqbql\" (UID: \"b14e02c4-5bcb-42cf-ac77-040a296222aa\") " pod="calico-system/calico-kube-controllers-564f6c74f7-tqbql" Jun 25 14:54:49.422552 kubelet[2843]: I0625 14:54:49.308130 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr96f\" (UniqueName: \"kubernetes.io/projected/0989afc6-f75f-4830-b87f-2ccfc1afc269-kube-api-access-dr96f\") pod \"coredns-5dd5756b68-rjfmh\" (UID: \"0989afc6-f75f-4830-b87f-2ccfc1afc269\") " pod="kube-system/coredns-5dd5756b68-rjfmh" Jun 25 14:54:49.182117 systemd[1]: Created slice kubepods-besteffort-podb14e02c4_5bcb_42cf_ac77_040a296222aa.slice - libcontainer container kubepods-besteffort-podb14e02c4_5bcb_42cf_ac77_040a296222aa.slice. Jun 25 14:54:49.186575 systemd[1]: Created slice kubepods-burstable-pod0989afc6_f75f_4830_b87f_2ccfc1afc269.slice - libcontainer container kubepods-burstable-pod0989afc6_f75f_4830_b87f_2ccfc1afc269.slice. Jun 25 14:54:49.723384 containerd[1492]: time="2024-06-25T14:54:49.723167265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qpwfn,Uid:9e6a8668-a98b-4401-b848-2bc30cd2cac6,Namespace:kube-system,Attempt:0,}" Jun 25 14:54:49.726275 containerd[1492]: time="2024-06-25T14:54:49.726219278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rjfmh,Uid:0989afc6-f75f-4830-b87f-2ccfc1afc269,Namespace:kube-system,Attempt:0,}" Jun 25 14:54:49.726600 containerd[1492]: time="2024-06-25T14:54:49.726567124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564f6c74f7-tqbql,Uid:b14e02c4-5bcb-42cf-ac77-040a296222aa,Namespace:calico-system,Attempt:0,}" Jun 25 14:54:50.282195 containerd[1492]: time="2024-06-25T14:54:50.282134672Z" level=info msg="shim disconnected" id=76c736356d8a848d10103ffd4fe2290cc05d2bcd5f9b22957808875238cf126a namespace=k8s.io Jun 25 14:54:50.282195 containerd[1492]: time="2024-06-25T14:54:50.282186433Z" level=warning msg="cleaning up after shim disconnected" id=76c736356d8a848d10103ffd4fe2290cc05d2bcd5f9b22957808875238cf126a namespace=k8s.io Jun 25 14:54:50.282195 containerd[1492]: time="2024-06-25T14:54:50.282196754Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:54:50.397696 containerd[1492]: time="2024-06-25T14:54:50.397608798Z" level=error msg="Failed to destroy network for sandbox \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.398298 containerd[1492]: time="2024-06-25T14:54:50.398243249Z" level=error msg="encountered an error cleaning up failed sandbox 
\"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.398368 containerd[1492]: time="2024-06-25T14:54:50.398308850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564f6c74f7-tqbql,Uid:b14e02c4-5bcb-42cf-ac77-040a296222aa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.398563 kubelet[2843]: E0625 14:54:50.398535 2843 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.398880 kubelet[2843]: E0625 14:54:50.398594 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564f6c74f7-tqbql" Jun 25 14:54:50.398880 kubelet[2843]: E0625 14:54:50.398616 2843 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564f6c74f7-tqbql" Jun 25 14:54:50.398880 kubelet[2843]: E0625 14:54:50.398669 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-564f6c74f7-tqbql_calico-system(b14e02c4-5bcb-42cf-ac77-040a296222aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-564f6c74f7-tqbql_calico-system(b14e02c4-5bcb-42cf-ac77-040a296222aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-564f6c74f7-tqbql" podUID="b14e02c4-5bcb-42cf-ac77-040a296222aa" Jun 25 14:54:50.425871 containerd[1492]: time="2024-06-25T14:54:50.425776717Z" level=error msg="Failed to destroy network for sandbox \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jun 25 14:54:50.426196 containerd[1492]: time="2024-06-25T14:54:50.426156924Z" level=error msg="encountered an error cleaning up failed sandbox \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.426243 containerd[1492]: time="2024-06-25T14:54:50.426218285Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qpwfn,Uid:9e6a8668-a98b-4401-b848-2bc30cd2cac6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.426826 kubelet[2843]: E0625 14:54:50.426433 2843 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.426826 kubelet[2843]: E0625 14:54:50.426487 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-qpwfn" Jun 25 14:54:50.426826 kubelet[2843]: E0625 14:54:50.426507 2843 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-qpwfn" Jun 25 14:54:50.426970 kubelet[2843]: E0625 14:54:50.426556 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-qpwfn_kube-system(9e6a8668-a98b-4401-b848-2bc30cd2cac6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-qpwfn_kube-system(9e6a8668-a98b-4401-b848-2bc30cd2cac6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-qpwfn" podUID="9e6a8668-a98b-4401-b848-2bc30cd2cac6" Jun 25 14:54:50.443531 containerd[1492]: time="2024-06-25T14:54:50.443477419Z" level=error msg="Failed to destroy network for sandbox \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.444018 containerd[1492]: time="2024-06-25T14:54:50.443983987Z" level=error msg="encountered an error cleaning up failed sandbox \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.444149 containerd[1492]: time="2024-06-25T14:54:50.444120869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rjfmh,Uid:0989afc6-f75f-4830-b87f-2ccfc1afc269,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.444583 kubelet[2843]: E0625 14:54:50.444425 2843 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.444583 kubelet[2843]: E0625 14:54:50.444474 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-rjfmh" Jun 25 14:54:50.444583 kubelet[2843]: E0625 14:54:50.444498 2843 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-rjfmh" Jun 25 14:54:50.444736 kubelet[2843]: E0625 14:54:50.444555 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-rjfmh_kube-system(0989afc6-f75f-4830-b87f-2ccfc1afc269)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-rjfmh_kube-system(0989afc6-f75f-4830-b87f-2ccfc1afc269)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-rjfmh" podUID="0989afc6-f75f-4830-b87f-2ccfc1afc269" Jun 25 14:54:50.538935 systemd[1]: Created slice kubepods-besteffort-podabc80f9d_37c5_4a3d_984d_c970bd8ec106.slice - libcontainer container kubepods-besteffort-podabc80f9d_37c5_4a3d_984d_c970bd8ec106.slice. 
Jun 25 14:54:50.541934 containerd[1492]: time="2024-06-25T14:54:50.541896334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gqsk,Uid:abc80f9d-37c5-4a3d-984d-c970bd8ec106,Namespace:calico-system,Attempt:0,}" Jun 25 14:54:50.617661 containerd[1492]: time="2024-06-25T14:54:50.617600502Z" level=error msg="Failed to destroy network for sandbox \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.618081 containerd[1492]: time="2024-06-25T14:54:50.618013989Z" level=error msg="encountered an error cleaning up failed sandbox \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.618132 containerd[1492]: time="2024-06-25T14:54:50.618103111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gqsk,Uid:abc80f9d-37c5-4a3d-984d-c970bd8ec106,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.618377 kubelet[2843]: E0625 14:54:50.618347 2843 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.618434 kubelet[2843]: E0625 14:54:50.618411 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4gqsk" Jun 25 14:54:50.618464 kubelet[2843]: E0625 14:54:50.618438 2843 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4gqsk" Jun 25 14:54:50.618518 kubelet[2843]: E0625 14:54:50.618499 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4gqsk_calico-system(abc80f9d-37c5-4a3d-984d-c970bd8ec106)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4gqsk_calico-system(abc80f9d-37c5-4a3d-984d-c970bd8ec106)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4gqsk" podUID="abc80f9d-37c5-4a3d-984d-c970bd8ec106" Jun 25 14:54:50.657224 kubelet[2843]: I0625 14:54:50.657127 2843 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Jun 25 14:54:50.657973 containerd[1492]: time="2024-06-25T14:54:50.657916268Z" level=info msg="StopPodSandbox for \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\"" Jun 25 14:54:50.660667 containerd[1492]: time="2024-06-25T14:54:50.660633954Z" level=info msg="Ensure that sandbox 0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5 in task-service has been cleanup successfully" Jun 25 14:54:50.661876 kubelet[2843]: I0625 14:54:50.661846 2843 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Jun 25 14:54:50.663284 containerd[1492]: time="2024-06-25T14:54:50.663248999Z" level=info msg="StopPodSandbox for \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\"" Jun 25 14:54:50.664292 containerd[1492]: time="2024-06-25T14:54:50.664246136Z" level=info msg="Ensure that sandbox 24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b in task-service has been cleanup successfully" Jun 25 14:54:50.669512 kubelet[2843]: I0625 14:54:50.669481 2843 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Jun 25 14:54:50.671621 containerd[1492]: time="2024-06-25T14:54:50.671566941Z" level=info msg="StopPodSandbox for \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\"" Jun 25 14:54:50.673024 containerd[1492]: time="2024-06-25T14:54:50.672997565Z" level=info msg="Ensure that sandbox 2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3 in task-service has been cleanup successfully" Jun 25 14:54:50.677636 containerd[1492]: time="2024-06-25T14:54:50.676998553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 14:54:50.683833 kubelet[2843]: I0625 14:54:50.682591 2843 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Jun 25 14:54:50.686825 containerd[1492]: time="2024-06-25T14:54:50.685585979Z" level=info msg="StopPodSandbox for \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\"" Jun 25 14:54:50.686825 containerd[1492]: time="2024-06-25T14:54:50.685768222Z" level=info msg="Ensure that sandbox 51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2 in task-service has been cleanup successfully" Jun 25 14:54:50.724082 containerd[1492]: time="2024-06-25T14:54:50.723969272Z" level=error msg="StopPodSandbox for \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\" failed" error="failed to destroy network for sandbox \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.724456 kubelet[2843]: E0625 14:54:50.724427 2843 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Jun 25 14:54:50.724534 kubelet[2843]: E0625 14:54:50.724487 2843 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b"} Jun 25 14:54:50.724534 kubelet[2843]: E0625 14:54:50.724522 2843 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0989afc6-f75f-4830-b87f-2ccfc1afc269\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:54:50.724625 kubelet[2843]: E0625 14:54:50.724560 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0989afc6-f75f-4830-b87f-2ccfc1afc269\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-rjfmh" podUID="0989afc6-f75f-4830-b87f-2ccfc1afc269" Jun 25 14:54:50.732928 containerd[1492]: time="2024-06-25T14:54:50.732871784Z" level=error msg="StopPodSandbox for \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\" failed" error="failed to destroy network for sandbox \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.733323 kubelet[2843]: E0625 14:54:50.733281 2843 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Jun 25 14:54:50.733413 kubelet[2843]: E0625 14:54:50.733339 2843 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3"} Jun 25 14:54:50.733413 kubelet[2843]: E0625 14:54:50.733372 2843 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"abc80f9d-37c5-4a3d-984d-c970bd8ec106\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:54:50.733495 kubelet[2843]: E0625 14:54:50.733414 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"abc80f9d-37c5-4a3d-984d-c970bd8ec106\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4gqsk" podUID="abc80f9d-37c5-4a3d-984d-c970bd8ec106" Jun 25 14:54:50.744188 containerd[1492]: time="2024-06-25T14:54:50.744124015Z" level=error msg="StopPodSandbox for \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\" failed" error="failed to destroy network for sandbox \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.744529 kubelet[2843]: E0625 14:54:50.744492 2843 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Jun 25 14:54:50.744599 kubelet[2843]: E0625 14:54:50.744555 2843 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5"} Jun 25 14:54:50.744599 kubelet[2843]: E0625 14:54:50.744592 2843 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b14e02c4-5bcb-42cf-ac77-040a296222aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:54:50.744690 kubelet[2843]: E0625 14:54:50.744636 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b14e02c4-5bcb-42cf-ac77-040a296222aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-564f6c74f7-tqbql" podUID="b14e02c4-5bcb-42cf-ac77-040a296222aa" Jun 25 14:54:50.752357 containerd[1492]: time="2024-06-25T14:54:50.752297235Z" level=error msg="StopPodSandbox for \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\" failed" error="failed to destroy network for sandbox 
\"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:54:50.752622 kubelet[2843]: E0625 14:54:50.752587 2843 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Jun 25 14:54:50.752694 kubelet[2843]: E0625 14:54:50.752631 2843 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2"} Jun 25 14:54:50.752694 kubelet[2843]: E0625 14:54:50.752669 2843 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e6a8668-a98b-4401-b848-2bc30cd2cac6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:54:50.752800 kubelet[2843]: E0625 14:54:50.752695 2843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e6a8668-a98b-4401-b848-2bc30cd2cac6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-qpwfn" podUID="9e6a8668-a98b-4401-b848-2bc30cd2cac6" Jun 25 14:54:51.314609 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b-shm.mount: Deactivated successfully. Jun 25 14:54:51.314705 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2-shm.mount: Deactivated successfully. Jun 25 14:54:51.314759 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5-shm.mount: Deactivated successfully. Jun 25 14:54:55.266804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008630995.mount: Deactivated successfully. 
Jun 25 14:54:56.668917 containerd[1492]: time="2024-06-25T14:54:56.668862360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:56.670902 containerd[1492]: time="2024-06-25T14:54:56.670652908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jun 25 14:54:56.674776 containerd[1492]: time="2024-06-25T14:54:56.674698889Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:56.677722 containerd[1492]: time="2024-06-25T14:54:56.677688734Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:56.682716 containerd[1492]: time="2024-06-25T14:54:56.682669050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:54:56.683313 containerd[1492]: time="2024-06-25T14:54:56.683266379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 6.006221945s" Jun 25 14:54:56.683313 containerd[1492]: time="2024-06-25T14:54:56.683307580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jun 25 14:54:56.697764 containerd[1492]: time="2024-06-25T14:54:56.697706439Z" level=info msg="CreateContainer within sandbox \"00f69bc50c72f4f9cb95af41630dd408cb5b3521a86bec7e35e6b69b1861480f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 14:54:56.725461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount564844256.mount: Deactivated successfully. Jun 25 14:54:56.736183 containerd[1492]: time="2024-06-25T14:54:56.736120742Z" level=info msg="CreateContainer within sandbox \"00f69bc50c72f4f9cb95af41630dd408cb5b3521a86bec7e35e6b69b1861480f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b0b8c45a1f1f6b9537bf08d95bea92f489c63eb0614a88497520d6fc93eb04e9\"" Jun 25 14:54:56.738168 containerd[1492]: time="2024-06-25T14:54:56.736905394Z" level=info msg="StartContainer for \"b0b8c45a1f1f6b9537bf08d95bea92f489c63eb0614a88497520d6fc93eb04e9\"" Jun 25 14:54:56.765014 systemd[1]: Started cri-containerd-b0b8c45a1f1f6b9537bf08d95bea92f489c63eb0614a88497520d6fc93eb04e9.scope - libcontainer container b0b8c45a1f1f6b9537bf08d95bea92f489c63eb0614a88497520d6fc93eb04e9. 
Jun 25 14:54:56.783033 kernel: kauditd_printk_skb: 8 callbacks suppressed Jun 25 14:54:56.783157 kernel: audit: type=1334 audit(1719327296.777:538): prog-id=175 op=LOAD Jun 25 14:54:56.777000 audit: BPF prog-id=175 op=LOAD Jun 25 14:54:56.777000 audit[4119]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3712 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:56.810137 kernel: audit: type=1300 audit(1719327296.777:538): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3712 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:56.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230623863343561316631663662393533376266303864393562656139 Jun 25 14:54:56.831620 kernel: audit: type=1327 audit(1719327296.777:538): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230623863343561316631663662393533376266303864393562656139 Jun 25 14:54:56.777000 audit: BPF prog-id=176 op=LOAD Jun 25 14:54:56.838442 kernel: audit: type=1334 audit(1719327296.777:539): prog-id=176 op=LOAD Jun 25 14:54:56.777000 audit[4119]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3712 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:56.859570 kernel: audit: type=1300 audit(1719327296.777:539): arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3712 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:56.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230623863343561316631663662393533376266303864393562656139 Jun 25 14:54:56.867841 containerd[1492]: time="2024-06-25T14:54:56.867794582Z" level=info msg="StartContainer for \"b0b8c45a1f1f6b9537bf08d95bea92f489c63eb0614a88497520d6fc93eb04e9\" returns successfully" Jun 25 14:54:56.880336 kernel: audit: type=1327 audit(1719327296.777:539): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230623863343561316631663662393533376266303864393562656139 Jun 25 14:54:56.781000 audit: BPF prog-id=176 op=UNLOAD Jun 25 14:54:56.886948 kernel: audit: type=1334 audit(1719327296.781:540): prog-id=176 op=UNLOAD Jun 25 14:54:56.781000 audit: BPF prog-id=175 op=UNLOAD Jun 25 14:54:56.892492 kernel: audit: type=1334 audit(1719327296.781:541): prog-id=175 op=UNLOAD Jun 25 
14:54:56.781000 audit: BPF prog-id=177 op=LOAD Jun 25 14:54:56.898086 kernel: audit: type=1334 audit(1719327296.781:542): prog-id=177 op=LOAD Jun 25 14:54:56.781000 audit[4119]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3712 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:56.921131 kernel: audit: type=1300 audit(1719327296.781:542): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3712 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:56.781000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230623863343561316631663662393533376266303864393562656139 Jun 25 14:54:57.161315 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 14:54:57.161448 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 14:54:58.437000 audit[4214]: AVC avc: denied { write } for pid=4214 comm="tee" name="fd" dev="proc" ino=26393 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:54:58.437000 audit[4214]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffa11ea08 a2=241 a3=1b6 items=1 ppid=4187 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.437000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 14:54:58.437000 audit: PATH item=0 name="/dev/fd/63" inode=26378 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:54:58.437000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:54:58.448000 audit[4223]: AVC avc: denied { write } for pid=4223 comm="tee" name="fd" dev="proc" ino=26407 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:54:58.448000 audit[4223]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd084d9f9 a2=241 a3=1b6 items=1 ppid=4189 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.448000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 14:54:58.448000 audit: PATH item=0 name="/dev/fd/63" inode=26395 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:54:58.448000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:54:58.467000 audit[4250]: AVC avc: denied { write } for pid=4250 comm="tee" name="fd" 
dev="proc" ino=26775 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:54:58.467000 audit[4250]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffa49c9f8 a2=241 a3=1b6 items=1 ppid=4193 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.467000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 14:54:58.467000 audit: PATH item=0 name="/dev/fd/63" inode=26772 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:54:58.467000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:54:58.471000 audit[4254]: AVC avc: denied { write } for pid=4254 comm="tee" name="fd" dev="proc" ino=26782 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:54:58.471000 audit[4254]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff7715a0a a2=241 a3=1b6 items=1 ppid=4199 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.471000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 14:54:58.471000 audit: PATH item=0 name="/dev/fd/63" inode=26779 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:54:58.471000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:54:58.472000 audit[4235]: AVC avc: denied { write } for pid=4235 comm="tee" name="fd" dev="proc" ino=26423 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:54:58.472000 audit[4235]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff395ca09 a2=241 a3=1b6 items=1 ppid=4191 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.472000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 14:54:58.472000 audit: PATH item=0 name="/dev/fd/63" inode=26402 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:54:58.472000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:54:58.484000 audit[4248]: AVC avc: denied { write } for pid=4248 comm="tee" name="fd" dev="proc" ino=26429 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:54:58.484000 audit[4248]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd086ea08 a2=241 a3=1b6 items=1 ppid=4197 pid=4248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.484000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 14:54:58.484000 audit: PATH item=0 name="/dev/fd/63" inode=26760 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:54:58.484000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:54:58.499000 audit[4261]: AVC avc: denied { write } for pid=4261 comm="tee" name="fd" dev="proc" ino=26438 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:54:58.499000 audit[4261]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe947da08 a2=241 a3=1b6 items=1 ppid=4219 pid=4261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.499000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 14:54:58.499000 audit: PATH item=0 name="/dev/fd/63" inode=26433 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:54:58.499000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:54:58.783035 systemd-networkd[1255]: vxlan.calico: Link UP Jun 25 14:54:58.783050 systemd-networkd[1255]: vxlan.calico: Gained carrier Jun 25 14:54:58.796000 audit: BPF prog-id=178 op=LOAD Jun 25 14:54:58.796000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe54df9c8 a2=70 a3=ffffe54dfa38 items=0 ppid=4188 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.796000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:54:58.796000 audit: BPF prog-id=178 op=UNLOAD Jun 25 14:54:58.801000 audit: BPF prog-id=179 op=LOAD Jun 25 14:54:58.801000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe54df9c8 a2=70 a3=4b243c items=0 ppid=4188 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.801000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:54:58.801000 audit: BPF prog-id=179 op=UNLOAD Jun 25 14:54:58.801000 audit: BPF prog-id=180 op=LOAD Jun 25 14:54:58.801000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe54df968 a2=70 a3=ffffe54df9d8 items=0 ppid=4188 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.801000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:54:58.801000 audit: BPF prog-id=180 op=UNLOAD Jun 25 14:54:58.803000 audit: BPF prog-id=181 op=LOAD Jun 25 14:54:58.803000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe54df998 a2=70 a3=12bd64a9 items=0 ppid=4188 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.803000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:54:58.816000 audit: BPF prog-id=181 op=UNLOAD Jun 25 14:54:58.947000 audit[4356]: NETFILTER_CFG table=mangle:106 family=2 entries=16 op=nft_register_chain pid=4356 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:54:58.947000 audit[4356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffcb7287a0 a2=0 a3=ffffa818bfa8 items=0 ppid=4188 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.947000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:54:58.967000 audit[4354]: NETFILTER_CFG table=raw:107 family=2 entries=19 op=nft_register_chain pid=4354 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:54:58.967000 audit[4354]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6992 a0=3 a1=fffff8475f60 a2=0 a3=ffff8d2a0fa8 items=0 ppid=4188 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.967000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:54:58.968000 audit[4355]: NETFILTER_CFG table=nat:108 family=2 entries=15 op=nft_register_chain pid=4355 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:54:58.968000 audit[4355]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe7129870 a2=0 a3=ffff9dca2fa8 items=0 ppid=4188 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.968000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:54:58.969000 audit[4358]: NETFILTER_CFG table=filter:109 family=2 entries=39 op=nft_register_chain pid=4358 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:54:58.969000 audit[4358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=ffffd922a460 a2=0 
a3=ffffa9587fa8 items=0 ppid=4188 pid=4358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.969000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:55:00.036970 systemd-networkd[1255]: vxlan.calico: Gained IPv6LL Jun 25 14:55:01.569313 containerd[1492]: time="2024-06-25T14:55:01.569248735Z" level=info msg="StopPodSandbox for \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\"" Jun 25 14:55:01.614951 kubelet[2843]: I0625 14:55:01.614359 2843 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-9lzcr" podStartSLOduration=6.568085367 podCreationTimestamp="2024-06-25 14:54:41 +0000 UTC" firstStartedPulling="2024-06-25 14:54:42.637346149 +0000 UTC m=+29.205503454" lastFinishedPulling="2024-06-25 14:54:56.683578024 +0000 UTC m=+43.251735369" observedRunningTime="2024-06-25 14:54:57.711992129 +0000 UTC m=+44.280149474" watchObservedRunningTime="2024-06-25 14:55:01.614317282 +0000 UTC m=+48.182474587" Jun 25 14:55:01.655115 containerd[1492]: 2024-06-25 14:55:01.615 [INFO][4385] k8s.go 608: Cleaning up netns ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Jun 25 14:55:01.655115 containerd[1492]: 2024-06-25 14:55:01.615 [INFO][4385] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" iface="eth0" netns="/var/run/netns/cni-185ff9d8-244d-4cc0-bfef-2d785474b1bf" Jun 25 14:55:01.655115 containerd[1492]: 2024-06-25 14:55:01.616 [INFO][4385] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" iface="eth0" netns="/var/run/netns/cni-185ff9d8-244d-4cc0-bfef-2d785474b1bf" Jun 25 14:55:01.655115 containerd[1492]: 2024-06-25 14:55:01.616 [INFO][4385] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" iface="eth0" netns="/var/run/netns/cni-185ff9d8-244d-4cc0-bfef-2d785474b1bf" Jun 25 14:55:01.655115 containerd[1492]: 2024-06-25 14:55:01.616 [INFO][4385] k8s.go 615: Releasing IP address(es) ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Jun 25 14:55:01.655115 containerd[1492]: 2024-06-25 14:55:01.616 [INFO][4385] utils.go 188: Calico CNI releasing IP address ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Jun 25 14:55:01.655115 containerd[1492]: 2024-06-25 14:55:01.636 [INFO][4391] ipam_plugin.go 411: Releasing address using handleID ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" HandleID="k8s-pod-network.24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" Jun 25 14:55:01.655115 containerd[1492]: 2024-06-25 14:55:01.636 [INFO][4391] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:01.655115 containerd[1492]: 2024-06-25 14:55:01.636 [INFO][4391] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:55:01.655115 containerd[1492]: 2024-06-25 14:55:01.647 [WARNING][4391] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" HandleID="k8s-pod-network.24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" Jun 25 14:55:01.655115 containerd[1492]: 2024-06-25 14:55:01.647 [INFO][4391] ipam_plugin.go 439: Releasing address using workloadID ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" HandleID="k8s-pod-network.24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" Jun 25 14:55:01.655115 containerd[1492]: 2024-06-25 14:55:01.650 [INFO][4391] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:55:01.655115 containerd[1492]: 2024-06-25 14:55:01.653 [INFO][4385] k8s.go 621: Teardown processing complete. ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Jun 25 14:55:01.657612 containerd[1492]: time="2024-06-25T14:55:01.657441521Z" level=info msg="TearDown network for sandbox \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\" successfully" Jun 25 14:55:01.657612 containerd[1492]: time="2024-06-25T14:55:01.657485282Z" level=info msg="StopPodSandbox for \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\" returns successfully" Jun 25 14:55:01.657810 systemd[1]: run-netns-cni\x2d185ff9d8\x2d244d\x2d4cc0\x2dbfef\x2d2d785474b1bf.mount: Deactivated successfully. Jun 25 14:55:01.658869 containerd[1492]: time="2024-06-25T14:55:01.658843181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rjfmh,Uid:0989afc6-f75f-4830-b87f-2ccfc1afc269,Namespace:kube-system,Attempt:1,}" Jun 25 14:55:02.075544 systemd-networkd[1255]: cali68f7c8aa45e: Link UP Jun 25 14:55:02.094241 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:55:02.094374 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali68f7c8aa45e: link becomes ready Jun 25 14:55:02.099102 systemd-networkd[1255]: cali68f7c8aa45e: Gained carrier Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:01.995 [INFO][4397] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0 coredns-5dd5756b68- kube-system 0989afc6-f75f-4830-b87f-2ccfc1afc269 801 0 2024-06-25 14:54:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815.2.4-a-f605b45a38 coredns-5dd5756b68-rjfmh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali68f7c8aa45e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" Namespace="kube-system" Pod="coredns-5dd5756b68-rjfmh" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-" Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:01.995 [INFO][4397] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" Namespace="kube-system" Pod="coredns-5dd5756b68-rjfmh" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.025 [INFO][4410] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" HandleID="k8s-pod-network.71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.038 [INFO][4410] ipam_plugin.go 264: Auto assigning IP ContainerID="71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" HandleID="k8s-pod-network.71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebce0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815.2.4-a-f605b45a38", "pod":"coredns-5dd5756b68-rjfmh", "timestamp":"2024-06-25 14:55:02.025582155 +0000 UTC"}, Hostname:"ci-3815.2.4-a-f605b45a38", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.038 [INFO][4410] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.038 [INFO][4410] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.038 [INFO][4410] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-f605b45a38' Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.040 [INFO][4410] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.043 [INFO][4410] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.047 [INFO][4410] ipam.go 489: Trying affinity for 192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.049 [INFO][4410] ipam.go 155: Attempting to load block cidr=192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.051 [INFO][4410] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.051 [INFO][4410] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.052 [INFO][4410] ipam.go 1685: Creating new handle: k8s-pod-network.71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602 Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.056 [INFO][4410] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.066 [INFO][4410] ipam.go 1216: Successfully claimed IPs: [192.168.19.65/26] block=192.168.19.64/26 handle="k8s-pod-network.71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 
14:55:02.066 [INFO][4410] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.65/26] handle="k8s-pod-network.71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.066 [INFO][4410] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:55:02.114698 containerd[1492]: 2024-06-25 14:55:02.066 [INFO][4410] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.19.65/26] IPv6=[] ContainerID="71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" HandleID="k8s-pod-network.71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" Jun 25 14:55:02.115362 containerd[1492]: 2024-06-25 14:55:02.069 [INFO][4397] k8s.go 386: Populated endpoint ContainerID="71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" Namespace="kube-system" Pod="coredns-5dd5756b68-rjfmh" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0989afc6-f75f-4830-b87f-2ccfc1afc269", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"", Pod:"coredns-5dd5756b68-rjfmh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68f7c8aa45e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:02.115362 containerd[1492]: 2024-06-25 14:55:02.069 [INFO][4397] k8s.go 387: Calico CNI using IPs: [192.168.19.65/32] ContainerID="71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" Namespace="kube-system" Pod="coredns-5dd5756b68-rjfmh" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" Jun 25 14:55:02.115362 containerd[1492]: 2024-06-25 14:55:02.069 [INFO][4397] dataplane_linux.go 68: Setting the host side veth name to cali68f7c8aa45e ContainerID="71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" Namespace="kube-system" Pod="coredns-5dd5756b68-rjfmh" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" 
Jun 25 14:55:02.115362 containerd[1492]: 2024-06-25 14:55:02.079 [INFO][4397] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" Namespace="kube-system" Pod="coredns-5dd5756b68-rjfmh" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" Jun 25 14:55:02.115362 containerd[1492]: 2024-06-25 14:55:02.080 [INFO][4397] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" Namespace="kube-system" Pod="coredns-5dd5756b68-rjfmh" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0989afc6-f75f-4830-b87f-2ccfc1afc269", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602", Pod:"coredns-5dd5756b68-rjfmh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68f7c8aa45e", MAC:"de:a4:96:5a:21:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:02.115362 containerd[1492]: 2024-06-25 14:55:02.106 [INFO][4397] k8s.go 500: Wrote updated endpoint to datastore ContainerID="71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602" Namespace="kube-system" Pod="coredns-5dd5756b68-rjfmh" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" Jun 25 14:55:02.144484 containerd[1492]: time="2024-06-25T14:55:02.144372419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:55:02.144644 containerd[1492]: time="2024-06-25T14:55:02.144447260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:55:02.144644 containerd[1492]: time="2024-06-25T14:55:02.144492981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:55:02.144644 containerd[1492]: time="2024-06-25T14:55:02.144507501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:55:02.176976 systemd[1]: Started cri-containerd-71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602.scope - libcontainer container 71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602. Jun 25 14:55:02.205912 kernel: kauditd_printk_skb: 64 callbacks suppressed Jun 25 14:55:02.206046 kernel: audit: type=1334 audit(1719327302.200:562): prog-id=182 op=LOAD Jun 25 14:55:02.200000 audit: BPF prog-id=182 op=LOAD Jun 25 14:55:02.210000 audit: BPF prog-id=183 op=LOAD Jun 25 14:55:02.217430 kernel: audit: type=1334 audit(1719327302.210:563): prog-id=183 op=LOAD Jun 25 14:55:02.210000 audit[4449]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=4439 pid=4449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.239257 kernel: audit: type=1300 audit(1719327302.210:563): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=4439 pid=4449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.240829 kernel: audit: type=1327 audit(1719327302.210:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731646330616637646334656236323636366138653837353263643133 Jun 25 14:55:02.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731646330616637646334656236323636366138653837353263643133 Jun 25 14:55:02.211000 audit: BPF prog-id=184 op=LOAD Jun 25 14:55:02.267594 kernel: audit: type=1334 audit(1719327302.211:564): prog-id=184 op=LOAD Jun 25 14:55:02.211000 audit[4449]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=4439 pid=4449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.288498 kernel: audit: type=1300 audit(1719327302.211:564): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=4439 pid=4449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731646330616637646334656236323636366138653837353263643133 Jun 25 14:55:02.310234 kernel: audit: type=1327 audit(1719327302.211:564): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731646330616637646334656236323636366138653837353263643133 Jun 25 14:55:02.211000 audit: BPF prog-id=184 op=UNLOAD Jun 25 14:55:02.316803 kernel: audit: type=1334 audit(1719327302.211:565): prog-id=184 op=UNLOAD Jun 25 14:55:02.211000 audit: BPF prog-id=183 op=UNLOAD Jun 25 14:55:02.322478 kernel: audit: type=1334 audit(1719327302.211:566): prog-id=183 op=UNLOAD Jun 25 14:55:02.211000 audit: BPF prog-id=185 op=LOAD Jun 25 14:55:02.328715 kernel: audit: type=1334 audit(1719327302.211:567): prog-id=185 op=LOAD Jun 25 14:55:02.211000 audit[4449]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=4439 pid=4449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731646330616637646334656236323636366138653837353263643133 Jun 25 14:55:02.227000 audit[4468]: NETFILTER_CFG table=filter:110 family=2 entries=34 op=nft_register_chain pid=4468 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:55:02.227000 audit[4468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=fffff16eff00 a2=0 a3=ffffa6c62fa8 items=0 ppid=4188 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.227000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:55:02.332067 containerd[1492]: time="2024-06-25T14:55:02.332019945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rjfmh,Uid:0989afc6-f75f-4830-b87f-2ccfc1afc269,Namespace:kube-system,Attempt:1,} returns sandbox id \"71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602\"" Jun 25 14:55:02.335768 containerd[1492]: time="2024-06-25T14:55:02.335529913Z" level=info msg="CreateContainer within sandbox \"71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 14:55:02.391915 containerd[1492]: time="2024-06-25T14:55:02.391860043Z" level=info msg="CreateContainer within sandbox \"71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aac753f705eab9580ba9c64363488d43f3f28ea8e4c624be1dba7aadbedd713a\"" Jun 25 14:55:02.392741 containerd[1492]: time="2024-06-25T14:55:02.392711215Z" level=info msg="StartContainer for \"aac753f705eab9580ba9c64363488d43f3f28ea8e4c624be1dba7aadbedd713a\"" Jun 25 14:55:02.417030 systemd[1]: Started cri-containerd-aac753f705eab9580ba9c64363488d43f3f28ea8e4c624be1dba7aadbedd713a.scope - libcontainer container aac753f705eab9580ba9c64363488d43f3f28ea8e4c624be1dba7aadbedd713a. 
Jun 25 14:55:02.425000 audit: BPF prog-id=186 op=LOAD Jun 25 14:55:02.426000 audit: BPF prog-id=187 op=LOAD Jun 25 14:55:02.426000 audit[4485]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=4439 pid=4485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161633735336637303565616239353830626139633634333633343838 Jun 25 14:55:02.426000 audit: BPF prog-id=188 op=LOAD Jun 25 14:55:02.426000 audit[4485]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=19 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=4439 pid=4485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161633735336637303565616239353830626139633634333633343838 Jun 25 14:55:02.426000 audit: BPF prog-id=188 op=UNLOAD Jun 25 14:55:02.426000 audit: BPF prog-id=187 op=UNLOAD Jun 25 14:55:02.426000 audit: BPF prog-id=189 op=LOAD Jun 25 14:55:02.426000 audit[4485]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=4439 pid=4485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161633735336637303565616239353830626139633634333633343838 Jun 25 14:55:02.444095 containerd[1492]: time="2024-06-25T14:55:02.444020517Z" level=info msg="StartContainer for \"aac753f705eab9580ba9c64363488d43f3f28ea8e4c624be1dba7aadbedd713a\" returns successfully" Jun 25 14:55:02.535835 containerd[1492]: time="2024-06-25T14:55:02.535766691Z" level=info msg="StopPodSandbox for \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\"" Jun 25 14:55:02.631544 containerd[1492]: 2024-06-25 14:55:02.593 [INFO][4530] k8s.go 608: Cleaning up netns ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Jun 25 14:55:02.631544 containerd[1492]: 2024-06-25 14:55:02.593 [INFO][4530] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" iface="eth0" netns="/var/run/netns/cni-9e61bb32-be61-2add-34ba-7fff30f410f5" Jun 25 14:55:02.631544 containerd[1492]: 2024-06-25 14:55:02.594 [INFO][4530] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" iface="eth0" netns="/var/run/netns/cni-9e61bb32-be61-2add-34ba-7fff30f410f5" Jun 25 14:55:02.631544 containerd[1492]: 2024-06-25 14:55:02.594 [INFO][4530] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" iface="eth0" netns="/var/run/netns/cni-9e61bb32-be61-2add-34ba-7fff30f410f5" Jun 25 14:55:02.631544 containerd[1492]: 2024-06-25 14:55:02.594 [INFO][4530] k8s.go 615: Releasing IP address(es) ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Jun 25 14:55:02.631544 containerd[1492]: 2024-06-25 14:55:02.594 [INFO][4530] utils.go 188: Calico CNI releasing IP address ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Jun 25 14:55:02.631544 containerd[1492]: 2024-06-25 14:55:02.618 [INFO][4537] ipam_plugin.go 411: Releasing address using handleID ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" HandleID="k8s-pod-network.2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Workload="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:02.631544 containerd[1492]: 2024-06-25 14:55:02.618 [INFO][4537] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:02.631544 containerd[1492]: 2024-06-25 14:55:02.618 [INFO][4537] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:55:02.631544 containerd[1492]: 2024-06-25 14:55:02.626 [WARNING][4537] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" HandleID="k8s-pod-network.2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Workload="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:02.631544 containerd[1492]: 2024-06-25 14:55:02.626 [INFO][4537] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" HandleID="k8s-pod-network.2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Workload="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:02.631544 containerd[1492]: 2024-06-25 14:55:02.627 [INFO][4537] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:55:02.631544 containerd[1492]: 2024-06-25 14:55:02.629 [INFO][4530] k8s.go 621: Teardown processing complete. ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Jun 25 14:55:02.631544 containerd[1492]: time="2024-06-25T14:55:02.630984313Z" level=info msg="TearDown network for sandbox \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\" successfully" Jun 25 14:55:02.631544 containerd[1492]: time="2024-06-25T14:55:02.631018394Z" level=info msg="StopPodSandbox for \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\" returns successfully" Jun 25 14:55:02.632198 containerd[1492]: time="2024-06-25T14:55:02.631963287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gqsk,Uid:abc80f9d-37c5-4a3d-984d-c970bd8ec106,Namespace:calico-system,Attempt:1,}" Jun 25 14:55:02.658117 systemd[1]: run-containerd-runc-k8s.io-71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602-runc.9Wphtr.mount: Deactivated successfully. Jun 25 14:55:02.658214 systemd[1]: run-netns-cni\x2d9e61bb32\x2dbe61\x2d2add\x2d34ba\x2d7fff30f410f5.mount: Deactivated successfully. 
Jun 25 14:55:02.725068 kubelet[2843]: I0625 14:55:02.724852 2843 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rjfmh" podStartSLOduration=35.724775836 podCreationTimestamp="2024-06-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:55:02.724701955 +0000 UTC m=+49.292859300" watchObservedRunningTime="2024-06-25 14:55:02.724775836 +0000 UTC m=+49.292933181" Jun 25 14:55:02.756000 audit[4556]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=4556 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:02.756000 audit[4556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffe345ad50 a2=0 a3=1 items=0 ppid=3025 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.756000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:02.757000 audit[4556]: NETFILTER_CFG table=nat:112 family=2 entries=14 op=nft_register_rule pid=4556 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:02.757000 audit[4556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe345ad50 a2=0 a3=1 items=0 ppid=3025 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.757000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:02.770000 audit[4563]: NETFILTER_CFG table=filter:113 family=2 entries=11 op=nft_register_rule pid=4563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:02.770000 audit[4563]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffe44cab80 a2=0 a3=1 items=0 ppid=3025 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:02.772000 audit[4563]: NETFILTER_CFG table=nat:114 family=2 entries=35 op=nft_register_chain pid=4563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:02.772000 audit[4563]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffe44cab80 a2=0 a3=1 items=0 ppid=3025 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.772000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:02.823732 systemd-networkd[1255]: calib42109d96f2: Link UP Jun 25 14:55:02.832818 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib42109d96f2: link becomes ready Jun 25 14:55:02.832845 systemd-networkd[1255]: calib42109d96f2: Gained carrier Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 
14:55:02.735 [INFO][4544] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0 csi-node-driver- calico-system abc80f9d-37c5-4a3d-984d-c970bd8ec106 812 0 2024-06-25 14:54:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3815.2.4-a-f605b45a38 csi-node-driver-4gqsk eth0 default [] [] [kns.calico-system ksa.calico-system.default] calib42109d96f2 [] []}} ContainerID="9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" Namespace="calico-system" Pod="csi-node-driver-4gqsk" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-" Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.737 [INFO][4544] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" Namespace="calico-system" Pod="csi-node-driver-4gqsk" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.776 [INFO][4557] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" HandleID="k8s-pod-network.9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" Workload="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.787 [INFO][4557] ipam_plugin.go 264: Auto assigning IP ContainerID="9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" HandleID="k8s-pod-network.9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" Workload="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000286bd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.4-a-f605b45a38", "pod":"csi-node-driver-4gqsk", "timestamp":"2024-06-25 14:55:02.776730666 +0000 UTC"}, Hostname:"ci-3815.2.4-a-f605b45a38", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.788 [INFO][4557] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.788 [INFO][4557] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.788 [INFO][4557] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-f605b45a38' Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.789 [INFO][4557] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.798 [INFO][4557] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.802 [INFO][4557] ipam.go 489: Trying affinity for 192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.804 [INFO][4557] ipam.go 155: Attempting to load block cidr=192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.806 [INFO][4557] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.806 [INFO][4557] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.807 [INFO][4557] ipam.go 1685: Creating new handle: k8s-pod-network.9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.810 [INFO][4557] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.815 [INFO][4557] ipam.go 1216: Successfully claimed IPs: [192.168.19.66/26] block=192.168.19.64/26 handle="k8s-pod-network.9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.815 [INFO][4557] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.66/26] handle="k8s-pod-network.9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.816 [INFO][4557] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:55:02.843620 containerd[1492]: 2024-06-25 14:55:02.816 [INFO][4557] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.19.66/26] IPv6=[] ContainerID="9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" HandleID="k8s-pod-network.9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" Workload="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:02.844240 containerd[1492]: 2024-06-25 14:55:02.817 [INFO][4544] k8s.go 386: Populated endpoint ContainerID="9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" Namespace="calico-system" Pod="csi-node-driver-4gqsk" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"abc80f9d-37c5-4a3d-984d-c970bd8ec106", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"", Pod:"csi-node-driver-4gqsk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib42109d96f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:02.844240 containerd[1492]: 2024-06-25 14:55:02.817 [INFO][4544] k8s.go 387: Calico CNI using IPs: [192.168.19.66/32] ContainerID="9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" Namespace="calico-system" Pod="csi-node-driver-4gqsk" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:02.844240 containerd[1492]: 2024-06-25 14:55:02.818 [INFO][4544] dataplane_linux.go 68: Setting the host side veth name to calib42109d96f2 ContainerID="9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" Namespace="calico-system" Pod="csi-node-driver-4gqsk" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:02.844240 containerd[1492]: 2024-06-25 14:55:02.833 [INFO][4544] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" Namespace="calico-system" Pod="csi-node-driver-4gqsk" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:02.844240 containerd[1492]: 2024-06-25 14:55:02.834 [INFO][4544] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" Namespace="calico-system" Pod="csi-node-driver-4gqsk" 
WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"abc80f9d-37c5-4a3d-984d-c970bd8ec106", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f", Pod:"csi-node-driver-4gqsk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib42109d96f2", MAC:"be:d2:71:b1:17:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:02.844240 containerd[1492]: 2024-06-25 14:55:02.841 [INFO][4544] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f" Namespace="calico-system" Pod="csi-node-driver-4gqsk" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:02.859000 audit[4581]: NETFILTER_CFG table=filter:115 family=2 entries=38 op=nft_register_chain pid=4581 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:55:02.859000 audit[4581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20336 a0=3 a1=ffffd7fc7020 a2=0 a3=ffff8484dfa8 items=0 ppid=4188 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.859000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:55:02.867735 containerd[1492]: time="2024-06-25T14:55:02.867613909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:55:02.867735 containerd[1492]: time="2024-06-25T14:55:02.867700430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:55:02.868289 containerd[1492]: time="2024-06-25T14:55:02.868232517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:55:02.868289 containerd[1492]: time="2024-06-25T14:55:02.868266238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:55:02.889946 systemd[1]: Started cri-containerd-9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f.scope - libcontainer container 9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f. Jun 25 14:55:02.897000 audit: BPF prog-id=190 op=LOAD Jun 25 14:55:02.898000 audit: BPF prog-id=191 op=LOAD Jun 25 14:55:02.898000 audit[4598]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=4589 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965616361616163323233653435386136333830663434643466303839 Jun 25 14:55:02.898000 audit: BPF prog-id=192 op=LOAD Jun 25 14:55:02.898000 audit[4598]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=4589 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965616361616163323233653435386136333830663434643466303839 Jun 25 14:55:02.898000 audit: BPF prog-id=192 op=UNLOAD Jun 25 14:55:02.899000 audit: BPF prog-id=191 op=UNLOAD Jun 25 14:55:02.899000 audit: BPF prog-id=193 op=LOAD Jun 25 14:55:02.899000 audit[4598]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=4589 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:02.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965616361616163323233653435386136333830663434643466303839 Jun 25 14:55:02.910872 containerd[1492]: time="2024-06-25T14:55:02.910828900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gqsk,Uid:abc80f9d-37c5-4a3d-984d-c970bd8ec106,Namespace:calico-system,Attempt:1,} returns sandbox id \"9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f\"" Jun 25 14:55:02.913079 containerd[1492]: time="2024-06-25T14:55:02.912646205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 14:55:03.456761 kubelet[2843]: I0625 14:55:03.456552 2843 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:55:03.657465 systemd[1]: run-containerd-runc-k8s.io-9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f-runc.emgWZS.mount: Deactivated successfully. 
Jun 25 14:55:03.876990 systemd-networkd[1255]: cali68f7c8aa45e: Gained IPv6LL Jun 25 14:55:04.388955 systemd-networkd[1255]: calib42109d96f2: Gained IPv6LL Jun 25 14:55:04.535176 containerd[1492]: time="2024-06-25T14:55:04.535115767Z" level=info msg="StopPodSandbox for \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\"" Jun 25 14:55:04.612089 containerd[1492]: 2024-06-25 14:55:04.577 [INFO][4688] k8s.go 608: Cleaning up netns ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Jun 25 14:55:04.612089 containerd[1492]: 2024-06-25 14:55:04.577 [INFO][4688] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" iface="eth0" netns="/var/run/netns/cni-fe979a57-b92e-abb4-5896-ab7c2ceac3ce" Jun 25 14:55:04.612089 containerd[1492]: 2024-06-25 14:55:04.577 [INFO][4688] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" iface="eth0" netns="/var/run/netns/cni-fe979a57-b92e-abb4-5896-ab7c2ceac3ce" Jun 25 14:55:04.612089 containerd[1492]: 2024-06-25 14:55:04.577 [INFO][4688] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" iface="eth0" netns="/var/run/netns/cni-fe979a57-b92e-abb4-5896-ab7c2ceac3ce" Jun 25 14:55:04.612089 containerd[1492]: 2024-06-25 14:55:04.578 [INFO][4688] k8s.go 615: Releasing IP address(es) ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Jun 25 14:55:04.612089 containerd[1492]: 2024-06-25 14:55:04.578 [INFO][4688] utils.go 188: Calico CNI releasing IP address ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Jun 25 14:55:04.612089 containerd[1492]: 2024-06-25 14:55:04.598 [INFO][4694] ipam_plugin.go 411: Releasing address using handleID ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" HandleID="k8s-pod-network.51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:04.612089 containerd[1492]: 2024-06-25 14:55:04.598 [INFO][4694] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:04.612089 containerd[1492]: 2024-06-25 14:55:04.598 [INFO][4694] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:55:04.612089 containerd[1492]: 2024-06-25 14:55:04.607 [WARNING][4694] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" HandleID="k8s-pod-network.51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:04.612089 containerd[1492]: 2024-06-25 14:55:04.607 [INFO][4694] ipam_plugin.go 439: Releasing address using workloadID ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" HandleID="k8s-pod-network.51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:04.612089 containerd[1492]: 2024-06-25 14:55:04.609 [INFO][4694] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:55:04.612089 containerd[1492]: 2024-06-25 14:55:04.610 [INFO][4688] k8s.go 621: Teardown processing complete. 
ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Jun 25 14:55:04.614429 systemd[1]: run-netns-cni\x2dfe979a57\x2db92e\x2dabb4\x2d5896\x2dab7c2ceac3ce.mount: Deactivated successfully. Jun 25 14:55:04.615769 containerd[1492]: time="2024-06-25T14:55:04.615267868Z" level=info msg="TearDown network for sandbox \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\" successfully" Jun 25 14:55:04.615769 containerd[1492]: time="2024-06-25T14:55:04.615314908Z" level=info msg="StopPodSandbox for \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\" returns successfully" Jun 25 14:55:04.616249 containerd[1492]: time="2024-06-25T14:55:04.616210720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qpwfn,Uid:9e6a8668-a98b-4401-b848-2bc30cd2cac6,Namespace:kube-system,Attempt:1,}" Jun 25 14:55:04.774344 systemd-networkd[1255]: cali4e5d32d7d00: Link UP Jun 25 14:55:04.785772 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:55:04.785988 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4e5d32d7d00: link becomes ready Jun 25 14:55:04.788015 systemd-networkd[1255]: cali4e5d32d7d00: Gained carrier Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.693 [INFO][4700] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0 coredns-5dd5756b68- kube-system 9e6a8668-a98b-4401-b848-2bc30cd2cac6 832 0 2024-06-25 14:54:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815.2.4-a-f605b45a38 coredns-5dd5756b68-qpwfn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4e5d32d7d00 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" Namespace="kube-system" Pod="coredns-5dd5756b68-qpwfn" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-" Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.693 [INFO][4700] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" Namespace="kube-system" Pod="coredns-5dd5756b68-qpwfn" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.724 [INFO][4713] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" HandleID="k8s-pod-network.ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.738 [INFO][4713] ipam_plugin.go 264: Auto assigning IP ContainerID="ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" HandleID="k8s-pod-network.ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000265e20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815.2.4-a-f605b45a38", "pod":"coredns-5dd5756b68-qpwfn", "timestamp":"2024-06-25 14:55:04.72427835 +0000 UTC"}, Hostname:"ci-3815.2.4-a-f605b45a38", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.738 [INFO][4713] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.738 [INFO][4713] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.738 [INFO][4713] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-f605b45a38' Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.740 [INFO][4713] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.744 [INFO][4713] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.749 [INFO][4713] ipam.go 489: Trying affinity for 192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.751 [INFO][4713] ipam.go 155: Attempting to load block cidr=192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.753 [INFO][4713] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.754 [INFO][4713] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.756 [INFO][4713] ipam.go 1685: Creating new handle: k8s-pod-network.ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981 Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.759 [INFO][4713] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.767 [INFO][4713] ipam.go 1216: Successfully claimed IPs: [192.168.19.67/26] block=192.168.19.64/26 handle="k8s-pod-network.ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.768 [INFO][4713] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.67/26] handle="k8s-pod-network.ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.768 [INFO][4713] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:55:04.798639 containerd[1492]: 2024-06-25 14:55:04.768 [INFO][4713] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.19.67/26] IPv6=[] ContainerID="ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" HandleID="k8s-pod-network.ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:04.799255 containerd[1492]: 2024-06-25 14:55:04.770 [INFO][4700] k8s.go 386: Populated endpoint ContainerID="ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" Namespace="kube-system" Pod="coredns-5dd5756b68-qpwfn" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9e6a8668-a98b-4401-b848-2bc30cd2cac6", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"", Pod:"coredns-5dd5756b68-qpwfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e5d32d7d00", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:04.799255 containerd[1492]: 2024-06-25 14:55:04.770 [INFO][4700] k8s.go 387: Calico CNI using IPs: [192.168.19.67/32] ContainerID="ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" Namespace="kube-system" Pod="coredns-5dd5756b68-qpwfn" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:04.799255 containerd[1492]: 2024-06-25 14:55:04.770 [INFO][4700] dataplane_linux.go 68: Setting the host side veth name to cali4e5d32d7d00 ContainerID="ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" Namespace="kube-system" Pod="coredns-5dd5756b68-qpwfn" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:04.799255 containerd[1492]: 2024-06-25 14:55:04.787 [INFO][4700] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" Namespace="kube-system" Pod="coredns-5dd5756b68-qpwfn" 
WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:04.799255 containerd[1492]: 2024-06-25 14:55:04.788 [INFO][4700] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" Namespace="kube-system" Pod="coredns-5dd5756b68-qpwfn" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9e6a8668-a98b-4401-b848-2bc30cd2cac6", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981", Pod:"coredns-5dd5756b68-qpwfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e5d32d7d00", MAC:"9a:3a:4e:c8:85:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:04.799255 containerd[1492]: 2024-06-25 14:55:04.796 [INFO][4700] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981" Namespace="kube-system" Pod="coredns-5dd5756b68-qpwfn" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:04.811000 audit[4732]: NETFILTER_CFG table=filter:116 family=2 entries=34 op=nft_register_chain pid=4732 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:55:04.811000 audit[4732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18220 a0=3 a1=ffffe2a5c8e0 a2=0 a3=ffff8abbcfa8 items=0 ppid=4188 pid=4732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:04.811000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:55:04.829768 containerd[1492]: time="2024-06-25T14:55:04.829662424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:55:04.830248 containerd[1492]: time="2024-06-25T14:55:04.830039989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:55:04.830248 containerd[1492]: time="2024-06-25T14:55:04.830089230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:55:04.830248 containerd[1492]: time="2024-06-25T14:55:04.830104190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:55:04.850094 systemd[1]: Started cri-containerd-ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981.scope - libcontainer container ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981. Jun 25 14:55:04.861000 audit: BPF prog-id=194 op=LOAD Jun 25 14:55:04.862000 audit: BPF prog-id=195 op=LOAD Jun 25 14:55:04.862000 audit[4751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=4741 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:04.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563333564613765616563656431346465653964626530656166643639 Jun 25 14:55:04.862000 audit: BPF prog-id=196 op=LOAD Jun 25 14:55:04.862000 audit[4751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=4741 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:04.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563333564613765616563656431346465653964626530656166643639 Jun 25 14:55:04.862000 audit: BPF prog-id=196 op=UNLOAD Jun 25 14:55:04.863000 audit: BPF prog-id=195 op=UNLOAD Jun 25 14:55:04.863000 audit: BPF prog-id=197 op=LOAD Jun 25 14:55:04.863000 audit[4751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=4741 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:04.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563333564613765616563656431346465653964626530656166643639 Jun 25 14:55:04.883756 containerd[1492]: time="2024-06-25T14:55:04.883710899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qpwfn,Uid:9e6a8668-a98b-4401-b848-2bc30cd2cac6,Namespace:kube-system,Attempt:1,} returns sandbox id \"ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981\"" Jun 25 14:55:04.887795 containerd[1492]: time="2024-06-25T14:55:04.887742672Z" level=info msg="CreateContainer 
within sandbox \"ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 14:55:04.929146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2624227462.mount: Deactivated successfully. Jun 25 14:55:04.944669 containerd[1492]: time="2024-06-25T14:55:04.944622265Z" level=info msg="CreateContainer within sandbox \"ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68b20c227c974ad7e78dd3456b728f0cca942f80a1301bf68baf23ec62dde5c3\"" Jun 25 14:55:04.946360 containerd[1492]: time="2024-06-25T14:55:04.946319167Z" level=info msg="StartContainer for \"68b20c227c974ad7e78dd3456b728f0cca942f80a1301bf68baf23ec62dde5c3\"" Jun 25 14:55:04.967965 systemd[1]: Started cri-containerd-68b20c227c974ad7e78dd3456b728f0cca942f80a1301bf68baf23ec62dde5c3.scope - libcontainer container 68b20c227c974ad7e78dd3456b728f0cca942f80a1301bf68baf23ec62dde5c3. Jun 25 14:55:04.977000 audit: BPF prog-id=198 op=LOAD Jun 25 14:55:04.977000 audit: BPF prog-id=199 op=LOAD Jun 25 14:55:04.977000 audit[4785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=4741 pid=4785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:04.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638623230633232376339373461643765373864643334353662373238 Jun 25 14:55:04.977000 audit: BPF prog-id=200 op=LOAD Jun 25 14:55:04.977000 audit[4785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=4741 pid=4785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:04.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638623230633232376339373461643765373864643334353662373238 Jun 25 14:55:04.977000 audit: BPF prog-id=200 op=UNLOAD Jun 25 14:55:04.977000 audit: BPF prog-id=199 op=UNLOAD Jun 25 14:55:04.977000 audit: BPF prog-id=201 op=LOAD Jun 25 14:55:04.977000 audit[4785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=4741 pid=4785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:04.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638623230633232376339373461643765373864643334353662373238 Jun 25 14:55:04.996614 containerd[1492]: time="2024-06-25T14:55:04.996459711Z" level=info msg="StartContainer for \"68b20c227c974ad7e78dd3456b728f0cca942f80a1301bf68baf23ec62dde5c3\" returns successfully" Jun 25 14:55:05.232399 containerd[1492]: time="2024-06-25T14:55:05.230651920Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:05.234436 containerd[1492]: time="2024-06-25T14:55:05.234400889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jun 25 14:55:05.240345 containerd[1492]: time="2024-06-25T14:55:05.240311326Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:05.244728 containerd[1492]: time="2024-06-25T14:55:05.244687183Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:05.253666 containerd[1492]: time="2024-06-25T14:55:05.253623899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:05.254377 containerd[1492]: time="2024-06-25T14:55:05.254332869Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 2.341648263s" Jun 25 14:55:05.254458 containerd[1492]: time="2024-06-25T14:55:05.254372869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jun 25 14:55:05.256487 containerd[1492]: time="2024-06-25T14:55:05.256458296Z" level=info msg="CreateContainer within sandbox \"9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 14:55:05.301499 containerd[1492]: time="2024-06-25T14:55:05.301442842Z" level=info msg="CreateContainer within sandbox \"9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c14560e81485a41fac05011db6f1b5bf1e4ab28fa3b069babc3c3135b000c7d1\"" Jun 25 14:55:05.302434 containerd[1492]: time="2024-06-25T14:55:05.302375254Z" level=info msg="StartContainer for \"c14560e81485a41fac05011db6f1b5bf1e4ab28fa3b069babc3c3135b000c7d1\"" Jun 25 14:55:05.326967 systemd[1]: Started cri-containerd-c14560e81485a41fac05011db6f1b5bf1e4ab28fa3b069babc3c3135b000c7d1.scope - libcontainer container c14560e81485a41fac05011db6f1b5bf1e4ab28fa3b069babc3c3135b000c7d1. 
Jun 25 14:55:05.342000 audit: BPF prog-id=202 op=LOAD Jun 25 14:55:05.342000 audit[4826]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=4589 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:05.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331343536306538313438356134316661633035303131646236663162 Jun 25 14:55:05.342000 audit: BPF prog-id=203 op=LOAD Jun 25 14:55:05.342000 audit[4826]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=4589 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:05.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331343536306538313438356134316661633035303131646236663162 Jun 25 14:55:05.342000 audit: BPF prog-id=203 op=UNLOAD Jun 25 14:55:05.342000 audit: BPF prog-id=202 op=UNLOAD Jun 25 14:55:05.342000 audit: BPF prog-id=204 op=LOAD Jun 25 14:55:05.342000 audit[4826]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=4589 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:05.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331343536306538313438356134316661633035303131646236663162 Jun 25 14:55:05.360353 containerd[1492]: time="2024-06-25T14:55:05.360300368Z" level=info msg="StartContainer for \"c14560e81485a41fac05011db6f1b5bf1e4ab28fa3b069babc3c3135b000c7d1\" returns successfully" Jun 25 14:55:05.361697 containerd[1492]: time="2024-06-25T14:55:05.361665306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 14:55:05.758160 kubelet[2843]: I0625 14:55:05.758109 2843 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-qpwfn" podStartSLOduration=38.758057986 podCreationTimestamp="2024-06-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:55:05.741912336 +0000 UTC m=+52.310069681" watchObservedRunningTime="2024-06-25 14:55:05.758057986 +0000 UTC m=+52.326215331" Jun 25 14:55:05.776000 audit[4854]: NETFILTER_CFG table=filter:117 family=2 entries=8 op=nft_register_rule pid=4854 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:05.776000 audit[4854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=fffff34223f0 a2=0 a3=1 items=0 ppid=3025 pid=4854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:05.776000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:05.779000 audit[4854]: NETFILTER_CFG table=nat:118 family=2 entries=44 op=nft_register_rule pid=4854 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:05.779000 audit[4854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff34223f0 a2=0 a3=1 items=0 ppid=3025 pid=4854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:05.779000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:05.790000 audit[4856]: NETFILTER_CFG table=filter:119 family=2 entries=8 op=nft_register_rule pid=4856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:05.790000 audit[4856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffe038b0e0 a2=0 a3=1 items=0 ppid=3025 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:05.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:05.796000 audit[4856]: NETFILTER_CFG table=nat:120 family=2 entries=56 op=nft_register_chain pid=4856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:05.796000 audit[4856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe038b0e0 a2=0 a3=1 items=0 ppid=3025 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:05.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:05.835521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3632584355.mount: Deactivated successfully. Jun 25 14:55:06.501049 systemd-networkd[1255]: cali4e5d32d7d00: Gained IPv6LL Jun 25 14:55:06.535506 containerd[1492]: time="2024-06-25T14:55:06.535458237Z" level=info msg="StopPodSandbox for \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\"" Jun 25 14:55:06.630684 containerd[1492]: 2024-06-25 14:55:06.587 [INFO][4873] k8s.go 608: Cleaning up netns ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Jun 25 14:55:06.630684 containerd[1492]: 2024-06-25 14:55:06.588 [INFO][4873] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" iface="eth0" netns="/var/run/netns/cni-17eaaa10-6952-4b8c-9db4-3fccf48b5d62" Jun 25 14:55:06.630684 containerd[1492]: 2024-06-25 14:55:06.588 [INFO][4873] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" iface="eth0" netns="/var/run/netns/cni-17eaaa10-6952-4b8c-9db4-3fccf48b5d62" Jun 25 14:55:06.630684 containerd[1492]: 2024-06-25 14:55:06.588 [INFO][4873] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" iface="eth0" netns="/var/run/netns/cni-17eaaa10-6952-4b8c-9db4-3fccf48b5d62" Jun 25 14:55:06.630684 containerd[1492]: 2024-06-25 14:55:06.588 [INFO][4873] k8s.go 615: Releasing IP address(es) ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Jun 25 14:55:06.630684 containerd[1492]: 2024-06-25 14:55:06.589 [INFO][4873] utils.go 188: Calico CNI releasing IP address ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Jun 25 14:55:06.630684 containerd[1492]: 2024-06-25 14:55:06.617 [INFO][4879] ipam_plugin.go 411: Releasing address using handleID ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" HandleID="k8s-pod-network.0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:06.630684 containerd[1492]: 2024-06-25 14:55:06.617 [INFO][4879] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:06.630684 containerd[1492]: 2024-06-25 14:55:06.617 [INFO][4879] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:55:06.630684 containerd[1492]: 2024-06-25 14:55:06.626 [WARNING][4879] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" HandleID="k8s-pod-network.0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:06.630684 containerd[1492]: 2024-06-25 14:55:06.626 [INFO][4879] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" HandleID="k8s-pod-network.0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:06.630684 containerd[1492]: 2024-06-25 14:55:06.627 [INFO][4879] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:55:06.630684 containerd[1492]: 2024-06-25 14:55:06.629 [INFO][4873] k8s.go 621: Teardown processing complete. ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Jun 25 14:55:06.633446 systemd[1]: run-netns-cni\x2d17eaaa10\x2d6952\x2d4b8c\x2d9db4\x2d3fccf48b5d62.mount: Deactivated successfully. 
Jun 25 14:55:06.634522 containerd[1492]: time="2024-06-25T14:55:06.634021780Z" level=info msg="TearDown network for sandbox \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\" successfully" Jun 25 14:55:06.634522 containerd[1492]: time="2024-06-25T14:55:06.634113141Z" level=info msg="StopPodSandbox for \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\" returns successfully" Jun 25 14:55:06.635372 containerd[1492]: time="2024-06-25T14:55:06.635331676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564f6c74f7-tqbql,Uid:b14e02c4-5bcb-42cf-ac77-040a296222aa,Namespace:calico-system,Attempt:1,}" Jun 25 14:55:07.868055 containerd[1492]: time="2024-06-25T14:55:07.868012499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:07.870978 containerd[1492]: time="2024-06-25T14:55:07.870939416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jun 25 14:55:07.875261 containerd[1492]: time="2024-06-25T14:55:07.875221110Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:07.882375 containerd[1492]: time="2024-06-25T14:55:07.882335759Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:07.890692 systemd-networkd[1255]: cali087f22294e6: Link UP Jun 25 14:55:07.891668 containerd[1492]: time="2024-06-25T14:55:07.891635597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:07.892744 containerd[1492]: time="2024-06-25T14:55:07.892700330Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 2.530856782s" Jun 25 14:55:07.892892 containerd[1492]: time="2024-06-25T14:55:07.892861412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jun 25 14:55:07.900485 containerd[1492]: time="2024-06-25T14:55:07.900377867Z" level=info msg="CreateContainer within sandbox \"9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 14:55:07.901930 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:55:07.902031 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali087f22294e6: link becomes ready Jun 25 14:55:07.903192 systemd-networkd[1255]: cali087f22294e6: Gained carrier Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.775 [INFO][4892] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0 calico-kube-controllers-564f6c74f7- 
calico-system b14e02c4-5bcb-42cf-ac77-040a296222aa 855 0 2024-06-25 14:54:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:564f6c74f7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3815.2.4-a-f605b45a38 calico-kube-controllers-564f6c74f7-tqbql eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali087f22294e6 [] []}} ContainerID="0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" Namespace="calico-system" Pod="calico-kube-controllers-564f6c74f7-tqbql" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-" Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.775 [INFO][4892] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" Namespace="calico-system" Pod="calico-kube-controllers-564f6c74f7-tqbql" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.832 [INFO][4903] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" HandleID="k8s-pod-network.0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.845 [INFO][4903] ipam_plugin.go 264: Auto assigning IP ContainerID="0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" HandleID="k8s-pod-network.0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001fb2b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.4-a-f605b45a38", "pod":"calico-kube-controllers-564f6c74f7-tqbql", "timestamp":"2024-06-25 14:55:07.832275008 +0000 UTC"}, Hostname:"ci-3815.2.4-a-f605b45a38", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.846 [INFO][4903] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.846 [INFO][4903] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.846 [INFO][4903] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-f605b45a38' Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.848 [INFO][4903] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.853 [INFO][4903] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.858 [INFO][4903] ipam.go 489: Trying affinity for 192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.860 [INFO][4903] ipam.go 155: Attempting to load block cidr=192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.862 [INFO][4903] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.862 [INFO][4903] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.866 [INFO][4903] ipam.go 1685: Creating new handle: k8s-pod-network.0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9 Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.870 [INFO][4903] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.878 [INFO][4903] ipam.go 1216: Successfully claimed IPs: [192.168.19.68/26] block=192.168.19.64/26 handle="k8s-pod-network.0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.878 [INFO][4903] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.68/26] handle="k8s-pod-network.0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.878 [INFO][4903] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:55:07.914890 containerd[1492]: 2024-06-25 14:55:07.878 [INFO][4903] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.19.68/26] IPv6=[] ContainerID="0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" HandleID="k8s-pod-network.0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:07.915504 containerd[1492]: 2024-06-25 14:55:07.879 [INFO][4892] k8s.go 386: Populated endpoint ContainerID="0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" Namespace="calico-system" Pod="calico-kube-controllers-564f6c74f7-tqbql" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0", GenerateName:"calico-kube-controllers-564f6c74f7-", Namespace:"calico-system", SelfLink:"", UID:"b14e02c4-5bcb-42cf-ac77-040a296222aa", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"564f6c74f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"", Pod:"calico-kube-controllers-564f6c74f7-tqbql", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali087f22294e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:07.915504 containerd[1492]: 2024-06-25 14:55:07.879 [INFO][4892] k8s.go 387: Calico CNI using IPs: [192.168.19.68/32] ContainerID="0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" Namespace="calico-system" Pod="calico-kube-controllers-564f6c74f7-tqbql" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:07.915504 containerd[1492]: 2024-06-25 14:55:07.879 [INFO][4892] dataplane_linux.go 68: Setting the host side veth name to cali087f22294e6 ContainerID="0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" Namespace="calico-system" Pod="calico-kube-controllers-564f6c74f7-tqbql" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:07.915504 containerd[1492]: 2024-06-25 14:55:07.903 [INFO][4892] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" Namespace="calico-system" Pod="calico-kube-controllers-564f6c74f7-tqbql" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:07.915504 containerd[1492]: 2024-06-25 14:55:07.904 [INFO][4892] 
k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" Namespace="calico-system" Pod="calico-kube-controllers-564f6c74f7-tqbql" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0", GenerateName:"calico-kube-controllers-564f6c74f7-", Namespace:"calico-system", SelfLink:"", UID:"b14e02c4-5bcb-42cf-ac77-040a296222aa", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"564f6c74f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9", Pod:"calico-kube-controllers-564f6c74f7-tqbql", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali087f22294e6", MAC:"06:01:fc:20:4d:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:07.915504 containerd[1492]: 2024-06-25 14:55:07.912 [INFO][4892] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9" Namespace="calico-system" Pod="calico-kube-controllers-564f6c74f7-tqbql" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:07.941000 audit[4934]: NETFILTER_CFG table=filter:121 family=2 entries=42 op=nft_register_chain pid=4934 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:55:07.945240 containerd[1492]: time="2024-06-25T14:55:07.944243140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:55:07.945240 containerd[1492]: time="2024-06-25T14:55:07.944353902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:55:07.945240 containerd[1492]: time="2024-06-25T14:55:07.944385342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:55:07.945240 containerd[1492]: time="2024-06-25T14:55:07.944429383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:55:07.946257 kernel: kauditd_printk_skb: 94 callbacks suppressed Jun 25 14:55:07.946357 kernel: audit: type=1325 audit(1719327307.941:608): table=filter:121 family=2 entries=42 op=nft_register_chain pid=4934 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:55:07.941000 audit[4934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21016 a0=3 a1=ffffc12d5230 a2=0 a3=ffffbc3abfa8 items=0 ppid=4188 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:07.941000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:55:08.008145 kernel: audit: type=1300 audit(1719327307.941:608): arch=c00000b7 syscall=211 success=yes exit=21016 a0=3 a1=ffffc12d5230 a2=0 a3=ffffbc3abfa8 items=0 ppid=4188 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:08.008279 kernel: audit: type=1327 audit(1719327307.941:608): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:55:08.017965 systemd[1]: Started cri-containerd-0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9.scope - libcontainer container 0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9. Jun 25 14:55:08.034000 audit: BPF prog-id=205 op=LOAD Jun 25 14:55:08.034000 audit: BPF prog-id=206 op=LOAD Jun 25 14:55:08.045895 kernel: audit: type=1334 audit(1719327308.034:609): prog-id=205 op=LOAD Jun 25 14:55:08.046024 kernel: audit: type=1334 audit(1719327308.034:610): prog-id=206 op=LOAD Jun 25 14:55:08.034000 audit[4944]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=4933 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:08.070456 kernel: audit: type=1300 audit(1719327308.034:610): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=4933 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:08.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064353636333261316235306636373163643730333136646535313864 Jun 25 14:55:08.094232 kernel: audit: type=1327 audit(1719327308.034:610): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064353636333261316235306636373163643730333136646535313864 Jun 25 14:55:08.095230 containerd[1492]: time="2024-06-25T14:55:08.095189066Z" level=info msg="CreateContainer within sandbox \"9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f\" 
for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8a71fb4236af40b61311cab990ffaec558b22d5214c6008151af6097d4b9f313\"" Jun 25 14:55:08.096158 containerd[1492]: time="2024-06-25T14:55:08.096122478Z" level=info msg="StartContainer for \"8a71fb4236af40b61311cab990ffaec558b22d5214c6008151af6097d4b9f313\"" Jun 25 14:55:08.039000 audit: BPF prog-id=207 op=LOAD Jun 25 14:55:08.107689 kernel: audit: type=1334 audit(1719327308.039:611): prog-id=207 op=LOAD Jun 25 14:55:08.039000 audit[4944]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=4933 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:08.135812 kernel: audit: type=1300 audit(1719327308.039:611): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=4933 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:08.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064353636333261316235306636373163643730333136646535313864 Jun 25 14:55:08.159117 kernel: audit: type=1327 audit(1719327308.039:611): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064353636333261316235306636373163643730333136646535313864 Jun 25 14:55:08.039000 audit: BPF prog-id=207 op=UNLOAD Jun 25 14:55:08.039000 audit: BPF prog-id=206 op=UNLOAD Jun 25 14:55:08.039000 audit: BPF prog-id=208 op=LOAD Jun 25 14:55:08.039000 audit[4944]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=4933 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:08.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064353636333261316235306636373163643730333136646535313864 Jun 25 14:55:08.162195 systemd[1]: Started cri-containerd-8a71fb4236af40b61311cab990ffaec558b22d5214c6008151af6097d4b9f313.scope - libcontainer container 8a71fb4236af40b61311cab990ffaec558b22d5214c6008151af6097d4b9f313. 
Jun 25 14:55:08.164305 containerd[1492]: time="2024-06-25T14:55:08.164264644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564f6c74f7-tqbql,Uid:b14e02c4-5bcb-42cf-ac77-040a296222aa,Namespace:calico-system,Attempt:1,} returns sandbox id \"0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9\"" Jun 25 14:55:08.167232 containerd[1492]: time="2024-06-25T14:55:08.167157160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 14:55:08.179000 audit: BPF prog-id=209 op=LOAD Jun 25 14:55:08.179000 audit[4973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001338b0 a2=78 a3=0 items=0 ppid=4589 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:08.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861373166623432333661663430623631333131636162393930666661 Jun 25 14:55:08.179000 audit: BPF prog-id=210 op=LOAD Jun 25 14:55:08.179000 audit[4973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000133640 a2=78 a3=0 items=0 ppid=4589 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:08.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861373166623432333661663430623631333131636162393930666661 Jun 25 14:55:08.179000 audit: BPF prog-id=210 op=UNLOAD Jun 25 14:55:08.179000 audit: BPF prog-id=209 op=UNLOAD Jun 25 14:55:08.179000 audit: BPF prog-id=211 op=LOAD Jun 25 14:55:08.179000 audit[4973]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000133b10 a2=78 a3=0 items=0 ppid=4589 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:08.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861373166623432333661663430623631333131636162393930666661 Jun 25 14:55:08.219442 containerd[1492]: time="2024-06-25T14:55:08.219380449Z" level=info msg="StartContainer for \"8a71fb4236af40b61311cab990ffaec558b22d5214c6008151af6097d4b9f313\" returns successfully" Jun 25 14:55:08.624569 kubelet[2843]: I0625 14:55:08.624399 2843 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 14:55:08.624569 kubelet[2843]: I0625 14:55:08.624435 2843 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 14:55:08.692304 systemd[1]: run-containerd-runc-k8s.io-0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9-runc.lpyddX.mount: Deactivated successfully. 
Jun 25 14:55:08.756679 kubelet[2843]: I0625 14:55:08.756324 2843 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-4gqsk" podStartSLOduration=29.773657799 podCreationTimestamp="2024-06-25 14:54:34 +0000 UTC" firstStartedPulling="2024-06-25 14:55:02.912116437 +0000 UTC m=+49.480273782" lastFinishedPulling="2024-06-25 14:55:07.894743876 +0000 UTC m=+54.462901181" observedRunningTime="2024-06-25 14:55:08.756268998 +0000 UTC m=+55.324426343" watchObservedRunningTime="2024-06-25 14:55:08.756285198 +0000 UTC m=+55.324442543" Jun 25 14:55:09.765962 systemd-networkd[1255]: cali087f22294e6: Gained IPv6LL Jun 25 14:55:09.981000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:09.981000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=68 a1=400cc45d10 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:55:09.981000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:55:09.982000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:09.982000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=68 a1=400ce85480 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:55:09.982000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:55:09.982000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=3646736 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:09.982000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=68 a1=400c8ad2c0 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:55:09.982000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:55:09.987000 audit[2733]: AVC avc: denied { watch 
} for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=3646742 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:09.987000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=68 a1=400cc45ef0 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:55:09.987000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:55:10.020000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:10.020000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=68 a1=400ce0b940 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:55:10.020000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:55:10.031000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:10.031000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=68 a1=400ab8b0b0 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:55:10.031000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:55:10.730000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:10.730000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:10.730000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=4001f055c0 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:55:10.730000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:10.730000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000a93360 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:55:10.730000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:11.346397 containerd[1492]: time="2024-06-25T14:55:11.346348472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:11.350003 containerd[1492]: time="2024-06-25T14:55:11.349945155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jun 25 14:55:11.356715 containerd[1492]: time="2024-06-25T14:55:11.355949506Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:11.361001 containerd[1492]: time="2024-06-25T14:55:11.360949245Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:11.366575 containerd[1492]: time="2024-06-25T14:55:11.366500631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:11.367998 containerd[1492]: time="2024-06-25T14:55:11.367940768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 3.200710087s" Jun 25 14:55:11.367998 containerd[1492]: time="2024-06-25T14:55:11.367995409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jun 25 14:55:11.381987 containerd[1492]: time="2024-06-25T14:55:11.381939655Z" level=info msg="CreateContainer within sandbox \"0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 14:55:11.426587 containerd[1492]: time="2024-06-25T14:55:11.426532824Z" level=info msg="CreateContainer within sandbox 
\"0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c18e0067a66301b341060c63335cd979b8fea934046d43b19d5a30fa4f3247f5\"" Jun 25 14:55:11.431492 containerd[1492]: time="2024-06-25T14:55:11.431452083Z" level=info msg="StartContainer for \"c18e0067a66301b341060c63335cd979b8fea934046d43b19d5a30fa4f3247f5\"" Jun 25 14:55:11.463963 systemd[1]: Started cri-containerd-c18e0067a66301b341060c63335cd979b8fea934046d43b19d5a30fa4f3247f5.scope - libcontainer container c18e0067a66301b341060c63335cd979b8fea934046d43b19d5a30fa4f3247f5. Jun 25 14:55:11.476000 audit: BPF prog-id=212 op=LOAD Jun 25 14:55:11.476000 audit: BPF prog-id=213 op=LOAD Jun 25 14:55:11.476000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=4933 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:11.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331386530303637613636333031623334313036306336333333356364 Jun 25 14:55:11.476000 audit: BPF prog-id=214 op=LOAD Jun 25 14:55:11.476000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=4933 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:11.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331386530303637613636333031623334313036306336333333356364 Jun 25 14:55:11.476000 audit: BPF prog-id=214 op=UNLOAD Jun 25 14:55:11.476000 audit: BPF prog-id=213 op=UNLOAD Jun 25 14:55:11.476000 audit: BPF prog-id=215 op=LOAD Jun 25 14:55:11.476000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=4933 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:11.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331386530303637613636333031623334313036306336333333356364 Jun 25 14:55:11.525054 containerd[1492]: time="2024-06-25T14:55:11.525002914Z" level=info msg="StartContainer for \"c18e0067a66301b341060c63335cd979b8fea934046d43b19d5a30fa4f3247f5\" returns successfully" Jun 25 14:55:11.792152 kubelet[2843]: I0625 14:55:11.792107 2843 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-564f6c74f7-tqbql" podStartSLOduration=32.59053871 podCreationTimestamp="2024-06-25 14:54:36 +0000 UTC" firstStartedPulling="2024-06-25 14:55:08.166763675 +0000 UTC m=+54.734921020" lastFinishedPulling="2024-06-25 14:55:11.368283532 +0000 UTC m=+57.936440837" observedRunningTime="2024-06-25 14:55:11.790697151 +0000 UTC 
m=+58.358854496" watchObservedRunningTime="2024-06-25 14:55:11.792058527 +0000 UTC m=+58.360215872" Jun 25 14:55:13.547909 containerd[1492]: time="2024-06-25T14:55:13.547868394Z" level=info msg="StopPodSandbox for \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\"" Jun 25 14:55:13.649128 containerd[1492]: 2024-06-25 14:55:13.602 [WARNING][5097] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9e6a8668-a98b-4401-b848-2bc30cd2cac6", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981", Pod:"coredns-5dd5756b68-qpwfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e5d32d7d00", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:13.649128 containerd[1492]: 2024-06-25 14:55:13.603 [INFO][5097] k8s.go 608: Cleaning up netns ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Jun 25 14:55:13.649128 containerd[1492]: 2024-06-25 14:55:13.603 [INFO][5097] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" iface="eth0" netns="" Jun 25 14:55:13.649128 containerd[1492]: 2024-06-25 14:55:13.603 [INFO][5097] k8s.go 615: Releasing IP address(es) ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Jun 25 14:55:13.649128 containerd[1492]: 2024-06-25 14:55:13.603 [INFO][5097] utils.go 188: Calico CNI releasing IP address ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Jun 25 14:55:13.649128 containerd[1492]: 2024-06-25 14:55:13.632 [INFO][5103] ipam_plugin.go 411: Releasing address using handleID ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" HandleID="k8s-pod-network.51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:13.649128 containerd[1492]: 2024-06-25 14:55:13.632 [INFO][5103] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:13.649128 containerd[1492]: 2024-06-25 14:55:13.632 [INFO][5103] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:55:13.649128 containerd[1492]: 2024-06-25 14:55:13.644 [WARNING][5103] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" HandleID="k8s-pod-network.51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:13.649128 containerd[1492]: 2024-06-25 14:55:13.644 [INFO][5103] ipam_plugin.go 439: Releasing address using workloadID ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" HandleID="k8s-pod-network.51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:13.649128 containerd[1492]: 2024-06-25 14:55:13.646 [INFO][5103] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:55:13.649128 containerd[1492]: 2024-06-25 14:55:13.647 [INFO][5097] k8s.go 621: Teardown processing complete. ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Jun 25 14:55:13.649607 containerd[1492]: time="2024-06-25T14:55:13.649168844Z" level=info msg="TearDown network for sandbox \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\" successfully" Jun 25 14:55:13.649607 containerd[1492]: time="2024-06-25T14:55:13.649197484Z" level=info msg="StopPodSandbox for \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\" returns successfully" Jun 25 14:55:13.654074 containerd[1492]: time="2024-06-25T14:55:13.654021060Z" level=info msg="RemovePodSandbox for \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\"" Jun 25 14:55:13.660286 containerd[1492]: time="2024-06-25T14:55:13.654064421Z" level=info msg="Forcibly stopping sandbox \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\"" Jun 25 14:55:13.769234 containerd[1492]: 2024-06-25 14:55:13.720 [WARNING][5123] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9e6a8668-a98b-4401-b848-2bc30cd2cac6", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"ec35da7eaeced14dee9dbe0eafd6974e7711835742dfb4a4d95f322b93f39981", Pod:"coredns-5dd5756b68-qpwfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e5d32d7d00", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:13.769234 containerd[1492]: 2024-06-25 14:55:13.720 [INFO][5123] k8s.go 608: Cleaning up netns ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Jun 25 14:55:13.769234 containerd[1492]: 2024-06-25 14:55:13.721 [INFO][5123] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" iface="eth0" netns="" Jun 25 14:55:13.769234 containerd[1492]: 2024-06-25 14:55:13.721 [INFO][5123] k8s.go 615: Releasing IP address(es) ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Jun 25 14:55:13.769234 containerd[1492]: 2024-06-25 14:55:13.721 [INFO][5123] utils.go 188: Calico CNI releasing IP address ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Jun 25 14:55:13.769234 containerd[1492]: 2024-06-25 14:55:13.750 [INFO][5131] ipam_plugin.go 411: Releasing address using handleID ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" HandleID="k8s-pod-network.51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:13.769234 containerd[1492]: 2024-06-25 14:55:13.750 [INFO][5131] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:13.769234 containerd[1492]: 2024-06-25 14:55:13.750 [INFO][5131] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:55:13.769234 containerd[1492]: 2024-06-25 14:55:13.763 [WARNING][5131] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" HandleID="k8s-pod-network.51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:13.769234 containerd[1492]: 2024-06-25 14:55:13.763 [INFO][5131] ipam_plugin.go 439: Releasing address using workloadID ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" HandleID="k8s-pod-network.51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--qpwfn-eth0" Jun 25 14:55:13.769234 containerd[1492]: 2024-06-25 14:55:13.765 [INFO][5131] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:55:13.769234 containerd[1492]: 2024-06-25 14:55:13.768 [INFO][5123] k8s.go 621: Teardown processing complete. ContainerID="51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2" Jun 25 14:55:13.769711 containerd[1492]: time="2024-06-25T14:55:13.769262951Z" level=info msg="TearDown network for sandbox \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\" successfully" Jun 25 14:55:14.279654 containerd[1492]: time="2024-06-25T14:55:14.279598999Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:55:14.279868 containerd[1492]: time="2024-06-25T14:55:14.279695800Z" level=info msg="RemovePodSandbox \"51bd1b9328699dcf9216804ea054d5a23e55aad230c965553508486924fe18c2\" returns successfully" Jun 25 14:55:14.280283 containerd[1492]: time="2024-06-25T14:55:14.280251767Z" level=info msg="StopPodSandbox for \"4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd\"" Jun 25 14:55:14.280400 containerd[1492]: time="2024-06-25T14:55:14.280351288Z" level=info msg="TearDown network for sandbox \"4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd\" successfully" Jun 25 14:55:14.280434 containerd[1492]: time="2024-06-25T14:55:14.280396768Z" level=info msg="StopPodSandbox for \"4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd\" returns successfully" Jun 25 14:55:14.280715 containerd[1492]: time="2024-06-25T14:55:14.280685332Z" level=info msg="RemovePodSandbox for \"4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd\"" Jun 25 14:55:14.280750 containerd[1492]: time="2024-06-25T14:55:14.280717372Z" level=info msg="Forcibly stopping sandbox \"4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd\"" Jun 25 14:55:14.280820 containerd[1492]: time="2024-06-25T14:55:14.280778773Z" level=info msg="TearDown network for sandbox \"4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd\" successfully" Jun 25 14:55:14.289326 containerd[1492]: time="2024-06-25T14:55:14.289282989Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 14:55:14.289444 containerd[1492]: time="2024-06-25T14:55:14.289348110Z" level=info msg="RemovePodSandbox \"4e8a848d35adfd56c1cfb1a1d11cce7803182bc19d9a4a3c3cc3844ddba929dd\" returns successfully" Jun 25 14:55:14.289716 containerd[1492]: time="2024-06-25T14:55:14.289688234Z" level=info msg="StopPodSandbox for \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\"" Jun 25 14:55:14.375625 containerd[1492]: 2024-06-25 14:55:14.328 [WARNING][5149] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0989afc6-f75f-4830-b87f-2ccfc1afc269", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602", Pod:"coredns-5dd5756b68-rjfmh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68f7c8aa45e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:14.375625 containerd[1492]: 2024-06-25 14:55:14.328 [INFO][5149] k8s.go 608: Cleaning up netns ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Jun 25 14:55:14.375625 containerd[1492]: 2024-06-25 14:55:14.328 [INFO][5149] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" iface="eth0" netns="" Jun 25 14:55:14.375625 containerd[1492]: 2024-06-25 14:55:14.328 [INFO][5149] k8s.go 615: Releasing IP address(es) ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Jun 25 14:55:14.375625 containerd[1492]: 2024-06-25 14:55:14.328 [INFO][5149] utils.go 188: Calico CNI releasing IP address ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Jun 25 14:55:14.375625 containerd[1492]: 2024-06-25 14:55:14.362 [INFO][5156] ipam_plugin.go 411: Releasing address using handleID ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" HandleID="k8s-pod-network.24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" Jun 25 14:55:14.375625 containerd[1492]: 2024-06-25 14:55:14.362 [INFO][5156] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:14.375625 containerd[1492]: 2024-06-25 14:55:14.362 [INFO][5156] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:55:14.375625 containerd[1492]: 2024-06-25 14:55:14.370 [WARNING][5156] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" HandleID="k8s-pod-network.24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" Jun 25 14:55:14.375625 containerd[1492]: 2024-06-25 14:55:14.370 [INFO][5156] ipam_plugin.go 439: Releasing address using workloadID ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" HandleID="k8s-pod-network.24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" Jun 25 14:55:14.375625 containerd[1492]: 2024-06-25 14:55:14.372 [INFO][5156] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:55:14.375625 containerd[1492]: 2024-06-25 14:55:14.374 [INFO][5149] k8s.go 621: Teardown processing complete. ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Jun 25 14:55:14.376177 containerd[1492]: time="2024-06-25T14:55:14.375918496Z" level=info msg="TearDown network for sandbox \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\" successfully" Jun 25 14:55:14.376177 containerd[1492]: time="2024-06-25T14:55:14.375963617Z" level=info msg="StopPodSandbox for \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\" returns successfully" Jun 25 14:55:14.376546 containerd[1492]: time="2024-06-25T14:55:14.376520223Z" level=info msg="RemovePodSandbox for \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\"" Jun 25 14:55:14.376681 containerd[1492]: time="2024-06-25T14:55:14.376639824Z" level=info msg="Forcibly stopping sandbox \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\"" Jun 25 14:55:14.447564 containerd[1492]: 2024-06-25 14:55:14.411 [WARNING][5175] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0989afc6-f75f-4830-b87f-2ccfc1afc269", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"71dc0af7dc4eb62666a8e8752cd135164c02662caa909f8c81e319113e215602", Pod:"coredns-5dd5756b68-rjfmh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68f7c8aa45e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:14.447564 containerd[1492]: 2024-06-25 14:55:14.412 [INFO][5175] k8s.go 608: Cleaning up netns ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Jun 25 14:55:14.447564 containerd[1492]: 2024-06-25 14:55:14.412 [INFO][5175] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" iface="eth0" netns="" Jun 25 14:55:14.447564 containerd[1492]: 2024-06-25 14:55:14.412 [INFO][5175] k8s.go 615: Releasing IP address(es) ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Jun 25 14:55:14.447564 containerd[1492]: 2024-06-25 14:55:14.412 [INFO][5175] utils.go 188: Calico CNI releasing IP address ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Jun 25 14:55:14.447564 containerd[1492]: 2024-06-25 14:55:14.433 [INFO][5181] ipam_plugin.go 411: Releasing address using handleID ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" HandleID="k8s-pod-network.24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" Jun 25 14:55:14.447564 containerd[1492]: 2024-06-25 14:55:14.434 [INFO][5181] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:14.447564 containerd[1492]: 2024-06-25 14:55:14.434 [INFO][5181] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:55:14.447564 containerd[1492]: 2024-06-25 14:55:14.443 [WARNING][5181] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" HandleID="k8s-pod-network.24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" Jun 25 14:55:14.447564 containerd[1492]: 2024-06-25 14:55:14.443 [INFO][5181] ipam_plugin.go 439: Releasing address using workloadID ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" HandleID="k8s-pod-network.24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Workload="ci--3815.2.4--a--f605b45a38-k8s-coredns--5dd5756b68--rjfmh-eth0" Jun 25 14:55:14.447564 containerd[1492]: 2024-06-25 14:55:14.445 [INFO][5181] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:55:14.447564 containerd[1492]: 2024-06-25 14:55:14.446 [INFO][5175] k8s.go 621: Teardown processing complete. ContainerID="24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b" Jun 25 14:55:14.448111 containerd[1492]: time="2024-06-25T14:55:14.448077518Z" level=info msg="TearDown network for sandbox \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\" successfully" Jun 25 14:55:14.458814 containerd[1492]: time="2024-06-25T14:55:14.458756439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:55:14.458930 containerd[1492]: time="2024-06-25T14:55:14.458850560Z" level=info msg="RemovePodSandbox \"24e620276a7978c883268a4c1afcf43a204a57123fa0cddeaaa65c4657152f7b\" returns successfully" Jun 25 14:55:14.459368 containerd[1492]: time="2024-06-25T14:55:14.459335206Z" level=info msg="StopPodSandbox for \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\"" Jun 25 14:55:14.533953 containerd[1492]: 2024-06-25 14:55:14.494 [WARNING][5199] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"abc80f9d-37c5-4a3d-984d-c970bd8ec106", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f", Pod:"csi-node-driver-4gqsk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib42109d96f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:14.533953 containerd[1492]: 2024-06-25 14:55:14.494 [INFO][5199] k8s.go 608: Cleaning up netns ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Jun 25 14:55:14.533953 containerd[1492]: 2024-06-25 14:55:14.494 [INFO][5199] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" iface="eth0" netns="" Jun 25 14:55:14.533953 containerd[1492]: 2024-06-25 14:55:14.494 [INFO][5199] k8s.go 615: Releasing IP address(es) ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Jun 25 14:55:14.533953 containerd[1492]: 2024-06-25 14:55:14.494 [INFO][5199] utils.go 188: Calico CNI releasing IP address ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Jun 25 14:55:14.533953 containerd[1492]: 2024-06-25 14:55:14.520 [INFO][5206] ipam_plugin.go 411: Releasing address using handleID ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" HandleID="k8s-pod-network.2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Workload="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:14.533953 containerd[1492]: 2024-06-25 14:55:14.521 [INFO][5206] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:14.533953 containerd[1492]: 2024-06-25 14:55:14.521 [INFO][5206] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:55:14.533953 containerd[1492]: 2024-06-25 14:55:14.529 [WARNING][5206] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" HandleID="k8s-pod-network.2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Workload="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:14.533953 containerd[1492]: 2024-06-25 14:55:14.529 [INFO][5206] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" HandleID="k8s-pod-network.2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Workload="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:14.533953 containerd[1492]: 2024-06-25 14:55:14.530 [INFO][5206] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:55:14.533953 containerd[1492]: 2024-06-25 14:55:14.531 [INFO][5199] k8s.go 621: Teardown processing complete. ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Jun 25 14:55:14.533953 containerd[1492]: time="2024-06-25T14:55:14.533519051Z" level=info msg="TearDown network for sandbox \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\" successfully" Jun 25 14:55:14.533953 containerd[1492]: time="2024-06-25T14:55:14.533563211Z" level=info msg="StopPodSandbox for \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\" returns successfully" Jun 25 14:55:14.534659 containerd[1492]: time="2024-06-25T14:55:14.534633303Z" level=info msg="RemovePodSandbox for \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\"" Jun 25 14:55:14.534984 containerd[1492]: time="2024-06-25T14:55:14.534882866Z" level=info msg="Forcibly stopping sandbox \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\"" Jun 25 14:55:14.640973 kernel: kauditd_printk_skb: 52 callbacks suppressed Jun 25 14:55:14.641128 kernel: audit: type=1325 audit(1719327314.624:634): table=filter:122 family=2 entries=9 op=nft_register_rule pid=5237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:14.624000 audit[5237]: NETFILTER_CFG table=filter:122 family=2 entries=9 op=nft_register_rule pid=5237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:14.645987 kubelet[2843]: I0625 14:55:14.645110 2843 topology_manager.go:215] "Topology Admit Handler" podUID="3c92aeb7-eec8-42f6-bd8d-0c33350f754e" podNamespace="calico-apiserver" podName="calico-apiserver-6db9d89946-gqwhg" Jun 25 14:55:14.651092 systemd[1]: Created slice kubepods-besteffort-pod3c92aeb7_eec8_42f6_bd8d_0c33350f754e.slice - libcontainer container kubepods-besteffort-pod3c92aeb7_eec8_42f6_bd8d_0c33350f754e.slice. Jun 25 14:55:14.658951 containerd[1492]: 2024-06-25 14:55:14.584 [WARNING][5224] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"abc80f9d-37c5-4a3d-984d-c970bd8ec106", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"9eacaaac223e458a6380f44d4f089ffc62a387128ca70ecbfdf36923c42e522f", Pod:"csi-node-driver-4gqsk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib42109d96f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:14.658951 containerd[1492]: 2024-06-25 14:55:14.585 [INFO][5224] k8s.go 608: Cleaning up netns ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Jun 25 14:55:14.658951 containerd[1492]: 2024-06-25 14:55:14.585 [INFO][5224] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" iface="eth0" netns="" Jun 25 14:55:14.658951 containerd[1492]: 2024-06-25 14:55:14.585 [INFO][5224] k8s.go 615: Releasing IP address(es) ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Jun 25 14:55:14.658951 containerd[1492]: 2024-06-25 14:55:14.585 [INFO][5224] utils.go 188: Calico CNI releasing IP address ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Jun 25 14:55:14.658951 containerd[1492]: 2024-06-25 14:55:14.610 [INFO][5230] ipam_plugin.go 411: Releasing address using handleID ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" HandleID="k8s-pod-network.2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Workload="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:14.658951 containerd[1492]: 2024-06-25 14:55:14.610 [INFO][5230] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:14.658951 containerd[1492]: 2024-06-25 14:55:14.610 [INFO][5230] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:55:14.658951 containerd[1492]: 2024-06-25 14:55:14.642 [WARNING][5230] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" HandleID="k8s-pod-network.2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Workload="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:14.658951 containerd[1492]: 2024-06-25 14:55:14.642 [INFO][5230] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" HandleID="k8s-pod-network.2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Workload="ci--3815.2.4--a--f605b45a38-k8s-csi--node--driver--4gqsk-eth0" Jun 25 14:55:14.658951 containerd[1492]: 2024-06-25 14:55:14.655 [INFO][5230] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:55:14.658951 containerd[1492]: 2024-06-25 14:55:14.657 [INFO][5224] k8s.go 621: Teardown processing complete. ContainerID="2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3" Jun 25 14:55:14.659579 containerd[1492]: time="2024-06-25T14:55:14.659546046Z" level=info msg="TearDown network for sandbox \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\" successfully" Jun 25 14:55:14.624000 audit[5237]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffd9b783b0 a2=0 a3=1 items=0 ppid=3025 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:14.670150 kubelet[2843]: I0625 14:55:14.670112 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3c92aeb7-eec8-42f6-bd8d-0c33350f754e-calico-apiserver-certs\") pod \"calico-apiserver-6db9d89946-gqwhg\" (UID: \"3c92aeb7-eec8-42f6-bd8d-0c33350f754e\") " pod="calico-apiserver/calico-apiserver-6db9d89946-gqwhg" Jun 25 14:55:14.670341 kubelet[2843]: I0625 14:55:14.670328 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvznq\" (UniqueName: \"kubernetes.io/projected/3c92aeb7-eec8-42f6-bd8d-0c33350f754e-kube-api-access-gvznq\") pod \"calico-apiserver-6db9d89946-gqwhg\" (UID: \"3c92aeb7-eec8-42f6-bd8d-0c33350f754e\") " pod="calico-apiserver/calico-apiserver-6db9d89946-gqwhg" Jun 25 14:55:14.690972 kubelet[2843]: I0625 14:55:14.690946 2843 topology_manager.go:215] "Topology Admit Handler" podUID="9d519ab2-cfe3-4d75-b329-d0089a6b05a3" podNamespace="calico-apiserver" podName="calico-apiserver-6db9d89946-btghg" Jun 25 14:55:14.624000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:14.705589 kernel: audit: type=1300 audit(1719327314.624:634): arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffd9b783b0 a2=0 a3=1 items=0 ppid=3025 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:14.705692 kernel: audit: type=1327 audit(1719327314.624:634): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:14.706708 containerd[1492]: time="2024-06-25T14:55:14.706653662Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:55:14.706949 containerd[1492]: time="2024-06-25T14:55:14.706922465Z" level=info msg="RemovePodSandbox \"2395c577f603e4ce910c9aa27b3eca77eb862719d068cb58779890d3973e5ca3\" returns successfully" Jun 25 14:55:14.708333 containerd[1492]: time="2024-06-25T14:55:14.708306561Z" level=info msg="StopPodSandbox for \"05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68\"" Jun 25 14:55:14.708613 containerd[1492]: time="2024-06-25T14:55:14.708567884Z" level=info msg="TearDown network for sandbox \"05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68\" successfully" Jun 25 14:55:14.709535 systemd[1]: Created slice kubepods-besteffort-pod9d519ab2_cfe3_4d75_b329_d0089a6b05a3.slice - libcontainer container kubepods-besteffort-pod9d519ab2_cfe3_4d75_b329_d0089a6b05a3.slice. Jun 25 14:55:14.709927 containerd[1492]: time="2024-06-25T14:55:14.709902299Z" level=info msg="StopPodSandbox for \"05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68\" returns successfully" Jun 25 14:55:14.713284 containerd[1492]: time="2024-06-25T14:55:14.712833133Z" level=info msg="RemovePodSandbox for \"05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68\"" Jun 25 14:55:14.713427 containerd[1492]: time="2024-06-25T14:55:14.713382379Z" level=info msg="Forcibly stopping sandbox \"05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68\"" Jun 25 14:55:14.713545 containerd[1492]: time="2024-06-25T14:55:14.713526301Z" level=info msg="TearDown network for sandbox \"05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68\" successfully" Jun 25 14:55:14.663000 audit[5237]: NETFILTER_CFG table=nat:123 family=2 entries=20 op=nft_register_rule pid=5237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:14.663000 audit[5237]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd9b783b0 a2=0 a3=1 items=0 ppid=3025 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:14.752223 kernel: audit: type=1325 audit(1719327314.663:635): table=nat:123 family=2 entries=20 op=nft_register_rule pid=5237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:14.752342 kernel: audit: type=1300 audit(1719327314.663:635): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd9b783b0 a2=0 a3=1 items=0 ppid=3025 pid=5237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:14.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:14.766147 kernel: audit: type=1327 audit(1719327314.663:635): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:14.754000 audit[5240]: NETFILTER_CFG table=filter:124 family=2 entries=10 op=nft_register_rule pid=5240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:14.770682 kubelet[2843]: I0625 14:55:14.770660 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9d519ab2-cfe3-4d75-b329-d0089a6b05a3-calico-apiserver-certs\") pod \"calico-apiserver-6db9d89946-btghg\" (UID: \"9d519ab2-cfe3-4d75-b329-d0089a6b05a3\") " pod="calico-apiserver/calico-apiserver-6db9d89946-btghg" Jun 25 14:55:14.771064 kubelet[2843]: I0625 14:55:14.771050 2843 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwgpg\" (UniqueName: \"kubernetes.io/projected/9d519ab2-cfe3-4d75-b329-d0089a6b05a3-kube-api-access-vwgpg\") pod \"calico-apiserver-6db9d89946-btghg\" (UID: \"9d519ab2-cfe3-4d75-b329-d0089a6b05a3\") " pod="calico-apiserver/calico-apiserver-6db9d89946-btghg" Jun 25 14:55:14.771701 kubelet[2843]: E0625 14:55:14.771677 2843 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 14:55:14.779679 kernel: audit: type=1325 audit(1719327314.754:636): table=filter:124 family=2 entries=10 op=nft_register_rule pid=5240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:14.779763 kernel: audit: type=1300 audit(1719327314.754:636): arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffc3761cf0 a2=0 a3=1 items=0 ppid=3025 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:14.754000 audit[5240]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffc3761cf0 a2=0 a3=1 items=0 ppid=3025 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:14.780024 containerd[1492]: time="2024-06-25T14:55:14.779993017Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 14:55:14.780161 containerd[1492]: time="2024-06-25T14:55:14.780139739Z" level=info msg="RemovePodSandbox \"05a8cce5095b65af45f81a457c805b87d43a566601ef22c8aacc03d409028e68\" returns successfully" Jun 25 14:55:14.754000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:14.815600 kernel: audit: type=1327 audit(1719327314.754:636): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:14.760000 audit[5240]: NETFILTER_CFG table=nat:125 family=2 entries=20 op=nft_register_rule pid=5240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:14.828152 kernel: audit: type=1325 audit(1719327314.760:637): table=nat:125 family=2 entries=20 op=nft_register_rule pid=5240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:14.760000 audit[5240]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc3761cf0 a2=0 a3=1 items=0 ppid=3025 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:14.760000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:14.833649 kubelet[2843]: E0625 14:55:14.833624 2843 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c92aeb7-eec8-42f6-bd8d-0c33350f754e-calico-apiserver-certs podName:3c92aeb7-eec8-42f6-bd8d-0c33350f754e nodeName:}" failed. No retries permitted until 2024-06-25 14:55:15.333594988 +0000 UTC m=+61.901752333 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/3c92aeb7-eec8-42f6-bd8d-0c33350f754e-calico-apiserver-certs") pod "calico-apiserver-6db9d89946-gqwhg" (UID: "3c92aeb7-eec8-42f6-bd8d-0c33350f754e") : secret "calico-apiserver-certs" not found Jun 25 14:55:14.845486 containerd[1492]: time="2024-06-25T14:55:14.845117439Z" level=info msg="StopPodSandbox for \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\"" Jun 25 14:55:14.871941 kubelet[2843]: E0625 14:55:14.871901 2843 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 14:55:14.872090 kubelet[2843]: E0625 14:55:14.871965 2843 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d519ab2-cfe3-4d75-b329-d0089a6b05a3-calico-apiserver-certs podName:9d519ab2-cfe3-4d75-b329-d0089a6b05a3 nodeName:}" failed. No retries permitted until 2024-06-25 14:55:15.371950345 +0000 UTC m=+61.940107690 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/9d519ab2-cfe3-4d75-b329-d0089a6b05a3-calico-apiserver-certs") pod "calico-apiserver-6db9d89946-btghg" (UID: "9d519ab2-cfe3-4d75-b329-d0089a6b05a3") : secret "calico-apiserver-certs" not found Jun 25 14:55:14.945739 containerd[1492]: 2024-06-25 14:55:14.907 [WARNING][5255] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0", GenerateName:"calico-kube-controllers-564f6c74f7-", Namespace:"calico-system", SelfLink:"", UID:"b14e02c4-5bcb-42cf-ac77-040a296222aa", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"564f6c74f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9", Pod:"calico-kube-controllers-564f6c74f7-tqbql", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali087f22294e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:14.945739 containerd[1492]: 2024-06-25 14:55:14.907 [INFO][5255] k8s.go 608: Cleaning up netns ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Jun 25 14:55:14.945739 containerd[1492]: 2024-06-25 14:55:14.907 [INFO][5255] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" iface="eth0" netns="" Jun 25 14:55:14.945739 containerd[1492]: 2024-06-25 14:55:14.907 [INFO][5255] k8s.go 615: Releasing IP address(es) ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Jun 25 14:55:14.945739 containerd[1492]: 2024-06-25 14:55:14.907 [INFO][5255] utils.go 188: Calico CNI releasing IP address ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Jun 25 14:55:14.945739 containerd[1492]: 2024-06-25 14:55:14.933 [INFO][5262] ipam_plugin.go 411: Releasing address using handleID ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" HandleID="k8s-pod-network.0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:14.945739 containerd[1492]: 2024-06-25 14:55:14.933 [INFO][5262] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:14.945739 containerd[1492]: 2024-06-25 14:55:14.933 [INFO][5262] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:55:14.945739 containerd[1492]: 2024-06-25 14:55:14.941 [WARNING][5262] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" HandleID="k8s-pod-network.0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:14.945739 containerd[1492]: 2024-06-25 14:55:14.941 [INFO][5262] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" HandleID="k8s-pod-network.0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:14.945739 containerd[1492]: 2024-06-25 14:55:14.943 [INFO][5262] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:55:14.945739 containerd[1492]: 2024-06-25 14:55:14.944 [INFO][5255] k8s.go 621: Teardown processing complete. ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Jun 25 14:55:14.946326 containerd[1492]: time="2024-06-25T14:55:14.946276751Z" level=info msg="TearDown network for sandbox \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\" successfully" Jun 25 14:55:14.946403 containerd[1492]: time="2024-06-25T14:55:14.946387672Z" level=info msg="StopPodSandbox for \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\" returns successfully" Jun 25 14:55:14.947001 containerd[1492]: time="2024-06-25T14:55:14.946975919Z" level=info msg="RemovePodSandbox for \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\"" Jun 25 14:55:14.947208 containerd[1492]: time="2024-06-25T14:55:14.947151201Z" level=info msg="Forcibly stopping sandbox \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\"" Jun 25 14:55:15.024457 containerd[1492]: 2024-06-25 14:55:14.984 [WARNING][5281] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0", GenerateName:"calico-kube-controllers-564f6c74f7-", Namespace:"calico-system", SelfLink:"", UID:"b14e02c4-5bcb-42cf-ac77-040a296222aa", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 54, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"564f6c74f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"0d56632a1b50f671cd70316de518dead39f2cd4243245821e11ba637dc6b5ba9", Pod:"calico-kube-controllers-564f6c74f7-tqbql", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali087f22294e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:15.024457 containerd[1492]: 2024-06-25 14:55:14.984 [INFO][5281] k8s.go 608: Cleaning up netns ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Jun 25 14:55:15.024457 containerd[1492]: 2024-06-25 14:55:14.984 [INFO][5281] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" iface="eth0" netns="" Jun 25 14:55:15.024457 containerd[1492]: 2024-06-25 14:55:14.985 [INFO][5281] k8s.go 615: Releasing IP address(es) ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Jun 25 14:55:15.024457 containerd[1492]: 2024-06-25 14:55:14.985 [INFO][5281] utils.go 188: Calico CNI releasing IP address ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Jun 25 14:55:15.024457 containerd[1492]: 2024-06-25 14:55:15.007 [INFO][5287] ipam_plugin.go 411: Releasing address using handleID ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" HandleID="k8s-pod-network.0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:15.024457 containerd[1492]: 2024-06-25 14:55:15.007 [INFO][5287] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:15.024457 containerd[1492]: 2024-06-25 14:55:15.007 [INFO][5287] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:55:15.024457 containerd[1492]: 2024-06-25 14:55:15.020 [WARNING][5287] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" HandleID="k8s-pod-network.0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:15.024457 containerd[1492]: 2024-06-25 14:55:15.020 [INFO][5287] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" HandleID="k8s-pod-network.0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--kube--controllers--564f6c74f7--tqbql-eth0" Jun 25 14:55:15.024457 containerd[1492]: 2024-06-25 14:55:15.021 [INFO][5287] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:55:15.024457 containerd[1492]: 2024-06-25 14:55:15.023 [INFO][5281] k8s.go 621: Teardown processing complete. ContainerID="0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5" Jun 25 14:55:15.025072 containerd[1492]: time="2024-06-25T14:55:15.025035084Z" level=info msg="TearDown network for sandbox \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\" successfully" Jun 25 14:55:15.032524 containerd[1492]: time="2024-06-25T14:55:15.032487768Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:55:15.032704 containerd[1492]: time="2024-06-25T14:55:15.032681410Z" level=info msg="RemovePodSandbox \"0c57418be5c2e7fd0275ef2e8991ff70718ee7a00a8e8c5f36033d0092c61ef5\" returns successfully" Jun 25 14:55:15.557290 containerd[1492]: time="2024-06-25T14:55:15.557243503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db9d89946-gqwhg,Uid:3c92aeb7-eec8-42f6-bd8d-0c33350f754e,Namespace:calico-apiserver,Attempt:0,}" Jun 25 14:55:15.616645 containerd[1492]: time="2024-06-25T14:55:15.616602410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db9d89946-btghg,Uid:9d519ab2-cfe3-4d75-b329-d0089a6b05a3,Namespace:calico-apiserver,Attempt:0,}" Jun 25 14:55:15.712099 systemd-networkd[1255]: calie7a70f54dfa: Link UP Jun 25 14:55:15.721934 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:55:15.722043 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie7a70f54dfa: link becomes ready Jun 25 14:55:15.723929 systemd-networkd[1255]: calie7a70f54dfa: Gained carrier Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.626 [INFO][5295] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--gqwhg-eth0 calico-apiserver-6db9d89946- calico-apiserver 3c92aeb7-eec8-42f6-bd8d-0c33350f754e 938 0 2024-06-25 14:55:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6db9d89946 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3815.2.4-a-f605b45a38 calico-apiserver-6db9d89946-gqwhg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie7a70f54dfa [] []}} ContainerID="cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-gqwhg" 
WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--gqwhg-" Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.626 [INFO][5295] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-gqwhg" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--gqwhg-eth0" Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.659 [INFO][5307] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" HandleID="k8s-pod-network.cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--gqwhg-eth0" Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.673 [INFO][5307] ipam_plugin.go 264: Auto assigning IP ContainerID="cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" HandleID="k8s-pod-network.cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--gqwhg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003182f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815.2.4-a-f605b45a38", "pod":"calico-apiserver-6db9d89946-gqwhg", "timestamp":"2024-06-25 14:55:15.659631373 +0000 UTC"}, Hostname:"ci-3815.2.4-a-f605b45a38", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.673 [INFO][5307] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.673 [INFO][5307] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.673 [INFO][5307] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-f605b45a38' Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.675 [INFO][5307] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.680 [INFO][5307] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.684 [INFO][5307] ipam.go 489: Trying affinity for 192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.686 [INFO][5307] ipam.go 155: Attempting to load block cidr=192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.688 [INFO][5307] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.688 [INFO][5307] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.690 [INFO][5307] ipam.go 1685: Creating new handle: k8s-pod-network.cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.694 [INFO][5307] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.702 [INFO][5307] ipam.go 1216: Successfully claimed IPs: [192.168.19.69/26] block=192.168.19.64/26 handle="k8s-pod-network.cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.702 [INFO][5307] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.69/26] handle="k8s-pod-network.cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.702 [INFO][5307] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:55:15.741980 containerd[1492]: 2024-06-25 14:55:15.702 [INFO][5307] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.19.69/26] IPv6=[] ContainerID="cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" HandleID="k8s-pod-network.cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--gqwhg-eth0" Jun 25 14:55:15.742744 containerd[1492]: 2024-06-25 14:55:15.708 [INFO][5295] k8s.go 386: Populated endpoint ContainerID="cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-gqwhg" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--gqwhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--gqwhg-eth0", GenerateName:"calico-apiserver-6db9d89946-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c92aeb7-eec8-42f6-bd8d-0c33350f754e", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 55, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db9d89946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"", Pod:"calico-apiserver-6db9d89946-gqwhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7a70f54dfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:15.742744 containerd[1492]: 2024-06-25 14:55:15.708 [INFO][5295] k8s.go 387: Calico CNI using IPs: [192.168.19.69/32] ContainerID="cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-gqwhg" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--gqwhg-eth0" Jun 25 14:55:15.742744 containerd[1492]: 2024-06-25 14:55:15.708 [INFO][5295] dataplane_linux.go 68: Setting the host side veth name to calie7a70f54dfa ContainerID="cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-gqwhg" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--gqwhg-eth0" Jun 25 14:55:15.742744 containerd[1492]: 2024-06-25 14:55:15.725 [INFO][5295] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-gqwhg" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--gqwhg-eth0" Jun 25 14:55:15.742744 containerd[1492]: 2024-06-25 14:55:15.725 [INFO][5295] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-gqwhg" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--gqwhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--gqwhg-eth0", GenerateName:"calico-apiserver-6db9d89946-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c92aeb7-eec8-42f6-bd8d-0c33350f754e", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 55, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db9d89946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e", Pod:"calico-apiserver-6db9d89946-gqwhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7a70f54dfa", MAC:"5a:47:e0:6b:29:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:15.742744 containerd[1492]: 2024-06-25 14:55:15.739 [INFO][5295] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-gqwhg" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--gqwhg-eth0" Jun 25 14:55:15.774000 audit[5359]: NETFILTER_CFG table=filter:126 family=2 entries=55 op=nft_register_chain pid=5359 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:55:15.774000 audit[5359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27464 a0=3 a1=ffffe4ed8c00 a2=0 a3=ffff981e7fa8 items=0 ppid=4188 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:15.774000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:55:15.783898 containerd[1492]: time="2024-06-25T14:55:15.783511924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:55:15.783898 containerd[1492]: time="2024-06-25T14:55:15.783574885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:55:15.783898 containerd[1492]: time="2024-06-25T14:55:15.783610246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:55:15.783898 containerd[1492]: time="2024-06-25T14:55:15.783624566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:55:15.802003 systemd[1]: Started cri-containerd-cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e.scope - libcontainer container cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e. Jun 25 14:55:15.829000 audit: BPF prog-id=216 op=LOAD Jun 25 14:55:15.830000 audit: BPF prog-id=217 op=LOAD Jun 25 14:55:15.830000 audit[5366]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=5354 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:15.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364326132306238623337623633323932383235313432663939386532 Jun 25 14:55:15.830000 audit: BPF prog-id=218 op=LOAD Jun 25 14:55:15.830000 audit[5366]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=5354 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:15.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364326132306238623337623633323932383235313432663939386532 Jun 25 14:55:15.830000 audit: BPF prog-id=218 op=UNLOAD Jun 25 14:55:15.830000 audit: BPF prog-id=217 op=UNLOAD Jun 25 14:55:15.830000 audit: BPF prog-id=219 op=LOAD Jun 25 14:55:15.830000 audit[5366]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a9b10 a2=78 a3=0 items=0 ppid=5354 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:15.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364326132306238623337623633323932383235313432663939386532 Jun 25 14:55:15.869625 systemd-networkd[1255]: cali30fdc386370: Link UP Jun 25 14:55:15.884860 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali30fdc386370: link becomes ready Jun 25 14:55:15.883866 systemd-networkd[1255]: cali30fdc386370: Gained carrier Jun 25 14:55:15.888530 containerd[1492]: time="2024-06-25T14:55:15.888476424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db9d89946-gqwhg,Uid:3c92aeb7-eec8-42f6-bd8d-0c33350f754e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e\"" Jun 25 14:55:15.891730 containerd[1492]: time="2024-06-25T14:55:15.891685380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.729 [INFO][5313] plugin.go 326: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--btghg-eth0 calico-apiserver-6db9d89946- calico-apiserver 9d519ab2-cfe3-4d75-b329-d0089a6b05a3 941 0 2024-06-25 14:55:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6db9d89946 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3815.2.4-a-f605b45a38 calico-apiserver-6db9d89946-btghg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali30fdc386370 [] []}} ContainerID="cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-btghg" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--btghg-" Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.729 [INFO][5313] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-btghg" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--btghg-eth0" Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.786 [INFO][5341] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" HandleID="k8s-pod-network.cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--btghg-eth0" Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.806 [INFO][5341] ipam_plugin.go 264: Auto assigning IP ContainerID="cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" HandleID="k8s-pod-network.cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--btghg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebb20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815.2.4-a-f605b45a38", "pod":"calico-apiserver-6db9d89946-btghg", "timestamp":"2024-06-25 14:55:15.786032993 +0000 UTC"}, Hostname:"ci-3815.2.4-a-f605b45a38", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.806 [INFO][5341] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.806 [INFO][5341] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.806 [INFO][5341] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-f605b45a38' Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.808 [INFO][5341] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.820 [INFO][5341] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.824 [INFO][5341] ipam.go 489: Trying affinity for 192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.826 [INFO][5341] ipam.go 155: Attempting to load block cidr=192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.828 [INFO][5341] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.828 [INFO][5341] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.833 [INFO][5341] ipam.go 1685: Creating new handle: k8s-pod-network.cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.836 [INFO][5341] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.841 [INFO][5341] ipam.go 1216: Successfully claimed IPs: [192.168.19.70/26] block=192.168.19.64/26 handle="k8s-pod-network.cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.841 [INFO][5341] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.70/26] handle="k8s-pod-network.cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" host="ci-3815.2.4-a-f605b45a38" Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.841 [INFO][5341] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:55:15.895353 containerd[1492]: 2024-06-25 14:55:15.842 [INFO][5341] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.19.70/26] IPv6=[] ContainerID="cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" HandleID="k8s-pod-network.cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" Workload="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--btghg-eth0" Jun 25 14:55:15.895933 containerd[1492]: 2024-06-25 14:55:15.843 [INFO][5313] k8s.go 386: Populated endpoint ContainerID="cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-btghg" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--btghg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--btghg-eth0", GenerateName:"calico-apiserver-6db9d89946-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d519ab2-cfe3-4d75-b329-d0089a6b05a3", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 55, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db9d89946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"", Pod:"calico-apiserver-6db9d89946-btghg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30fdc386370", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:15.895933 containerd[1492]: 2024-06-25 14:55:15.843 [INFO][5313] k8s.go 387: Calico CNI using IPs: [192.168.19.70/32] ContainerID="cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-btghg" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--btghg-eth0" Jun 25 14:55:15.895933 containerd[1492]: 2024-06-25 14:55:15.843 [INFO][5313] dataplane_linux.go 68: Setting the host side veth name to cali30fdc386370 ContainerID="cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-btghg" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--btghg-eth0" Jun 25 14:55:15.895933 containerd[1492]: 2024-06-25 14:55:15.881 [INFO][5313] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-btghg" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--btghg-eth0" Jun 25 14:55:15.895933 containerd[1492]: 2024-06-25 14:55:15.882 [INFO][5313] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-btghg" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--btghg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--btghg-eth0", GenerateName:"calico-apiserver-6db9d89946-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d519ab2-cfe3-4d75-b329-d0089a6b05a3", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 55, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db9d89946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-f605b45a38", ContainerID:"cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e", Pod:"calico-apiserver-6db9d89946-btghg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30fdc386370", MAC:"ea:d6:77:2e:47:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:55:15.895933 containerd[1492]: 2024-06-25 14:55:15.892 [INFO][5313] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e" Namespace="calico-apiserver" Pod="calico-apiserver-6db9d89946-btghg" WorkloadEndpoint="ci--3815.2.4--a--f605b45a38-k8s-calico--apiserver--6db9d89946--btghg-eth0" Jun 25 14:55:15.905000 audit[5398]: NETFILTER_CFG table=filter:127 family=2 entries=49 op=nft_register_chain pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:55:15.905000 audit[5398]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24300 a0=3 a1=ffffe7ac45b0 a2=0 a3=ffffbed00fa8 items=0 ppid=4188 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:15.905000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:55:15.931652 containerd[1492]: time="2024-06-25T14:55:15.931502667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:55:15.931652 containerd[1492]: time="2024-06-25T14:55:15.931592788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:55:15.931652 containerd[1492]: time="2024-06-25T14:55:15.931612788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:55:15.931652 containerd[1492]: time="2024-06-25T14:55:15.931626548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:55:15.948988 systemd[1]: Started cri-containerd-cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e.scope - libcontainer container cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e. Jun 25 14:55:15.969000 audit: BPF prog-id=220 op=LOAD Jun 25 14:55:15.969000 audit: BPF prog-id=221 op=LOAD Jun 25 14:55:15.969000 audit[5421]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=5411 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:15.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362326161643238643632663039376532376635323265363935633431 Jun 25 14:55:15.969000 audit: BPF prog-id=222 op=LOAD Jun 25 14:55:15.969000 audit[5421]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=5411 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:15.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362326161643238643632663039376532376635323265363935633431 Jun 25 14:55:15.970000 audit: BPF prog-id=222 op=UNLOAD Jun 25 14:55:15.970000 audit: BPF prog-id=221 op=UNLOAD Jun 25 14:55:15.970000 audit: BPF prog-id=223 op=LOAD Jun 25 14:55:15.970000 audit[5421]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=5411 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:15.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362326161643238643632663039376532376635323265363935633431 Jun 25 14:55:15.990503 containerd[1492]: time="2024-06-25T14:55:15.990463329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db9d89946-btghg,Uid:9d519ab2-cfe3-4d75-b329-d0089a6b05a3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e\"" Jun 25 14:55:16.996983 systemd-networkd[1255]: cali30fdc386370: Gained IPv6LL Jun 25 14:55:17.767406 systemd-networkd[1255]: calie7a70f54dfa: Gained IPv6LL Jun 25 14:55:18.759609 containerd[1492]: time="2024-06-25T14:55:18.759545304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:18.763717 containerd[1492]: time="2024-06-25T14:55:18.763675428Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jun 25 14:55:18.767178 containerd[1492]: time="2024-06-25T14:55:18.767142706Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:18.776135 containerd[1492]: time="2024-06-25T14:55:18.776100082Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:18.781879 containerd[1492]: time="2024-06-25T14:55:18.781848224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:18.782622 containerd[1492]: time="2024-06-25T14:55:18.782583272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 2.89070861s" Jun 25 14:55:18.782688 containerd[1492]: time="2024-06-25T14:55:18.782622233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jun 25 14:55:18.785453 containerd[1492]: time="2024-06-25T14:55:18.785413263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 14:55:18.787385 containerd[1492]: time="2024-06-25T14:55:18.787357724Z" level=info msg="CreateContainer within sandbox \"cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 14:55:18.825469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3940811483.mount: Deactivated successfully. Jun 25 14:55:18.844206 containerd[1492]: time="2024-06-25T14:55:18.844159897Z" level=info msg="CreateContainer within sandbox \"cd2a20b8b37b63292825142f998e22878c62af68bf92b08f3a03db5cc682ad9e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b684f7b11aef9afc44eaef08023b0e8caeb073ebbee426b3f96214b7f56bc4fe\"" Jun 25 14:55:18.845738 containerd[1492]: time="2024-06-25T14:55:18.845710274Z" level=info msg="StartContainer for \"b684f7b11aef9afc44eaef08023b0e8caeb073ebbee426b3f96214b7f56bc4fe\"" Jun 25 14:55:18.878933 systemd[1]: Started cri-containerd-b684f7b11aef9afc44eaef08023b0e8caeb073ebbee426b3f96214b7f56bc4fe.scope - libcontainer container b684f7b11aef9afc44eaef08023b0e8caeb073ebbee426b3f96214b7f56bc4fe. 
Jun 25 14:55:18.887000 audit: BPF prog-id=224 op=LOAD Jun 25 14:55:18.888000 audit: BPF prog-id=225 op=LOAD Jun 25 14:55:18.888000 audit[5459]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=5354 pid=5459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:18.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236383466376231316165663961666334346561656630383032336230 Jun 25 14:55:18.888000 audit: BPF prog-id=226 op=LOAD Jun 25 14:55:18.888000 audit[5459]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=5354 pid=5459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:18.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236383466376231316165663961666334346561656630383032336230 Jun 25 14:55:18.888000 audit: BPF prog-id=226 op=UNLOAD Jun 25 14:55:18.888000 audit: BPF prog-id=225 op=UNLOAD Jun 25 14:55:18.888000 audit: BPF prog-id=227 op=LOAD Jun 25 14:55:18.888000 audit[5459]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=5354 pid=5459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:18.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236383466376231316165663961666334346561656630383032336230 Jun 25 14:55:18.913260 containerd[1492]: time="2024-06-25T14:55:18.913216643Z" level=info msg="StartContainer for \"b684f7b11aef9afc44eaef08023b0e8caeb073ebbee426b3f96214b7f56bc4fe\" returns successfully" Jun 25 14:55:19.408104 containerd[1492]: time="2024-06-25T14:55:19.408047492Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:19.410639 containerd[1492]: time="2024-06-25T14:55:19.410608999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jun 25 14:55:19.414752 containerd[1492]: time="2024-06-25T14:55:19.414723923Z" level=info msg="ImageUpdate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:19.421213 containerd[1492]: time="2024-06-25T14:55:19.421181112Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:19.436638 containerd[1492]: time="2024-06-25T14:55:19.436596916Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:19.438281 containerd[1492]: time="2024-06-25T14:55:19.438250454Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 652.68667ms" Jun 25 14:55:19.438415 containerd[1492]: time="2024-06-25T14:55:19.438396176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jun 25 14:55:19.440796 containerd[1492]: time="2024-06-25T14:55:19.440753321Z" level=info msg="CreateContainer within sandbox \"cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 14:55:19.488775 containerd[1492]: time="2024-06-25T14:55:19.488719752Z" level=info msg="CreateContainer within sandbox \"cb2aad28d62f097e27f522e695c41bfc524809e94d29d0fbe5f112612a21f28e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1bec068089a1479e1cdafb3bfa2bdfa0bbad0902989705ed727ae22144680711\"" Jun 25 14:55:19.489633 containerd[1492]: time="2024-06-25T14:55:19.489568921Z" level=info msg="StartContainer for \"1bec068089a1479e1cdafb3bfa2bdfa0bbad0902989705ed727ae22144680711\"" Jun 25 14:55:19.513956 systemd[1]: Started cri-containerd-1bec068089a1479e1cdafb3bfa2bdfa0bbad0902989705ed727ae22144680711.scope - libcontainer container 1bec068089a1479e1cdafb3bfa2bdfa0bbad0902989705ed727ae22144680711. Jun 25 14:55:19.529000 audit: BPF prog-id=228 op=LOAD Jun 25 14:55:19.530000 audit: BPF prog-id=229 op=LOAD Jun 25 14:55:19.530000 audit[5509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=5411 pid=5509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:19.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162656330363830383961313437396531636461666233626661326264 Jun 25 14:55:19.530000 audit: BPF prog-id=230 op=LOAD Jun 25 14:55:19.530000 audit[5509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=5411 pid=5509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:19.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162656330363830383961313437396531636461666233626661326264 Jun 25 14:55:19.530000 audit: BPF prog-id=230 op=UNLOAD Jun 25 14:55:19.530000 audit: BPF prog-id=229 op=UNLOAD Jun 25 14:55:19.530000 audit: BPF prog-id=231 op=LOAD Jun 25 14:55:19.530000 audit[5509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=5411 pid=5509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:19.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162656330363830383961313437396531636461666233626661326264 Jun 25 14:55:19.556465 containerd[1492]: time="2024-06-25T14:55:19.556359154Z" level=info msg="StartContainer for \"1bec068089a1479e1cdafb3bfa2bdfa0bbad0902989705ed727ae22144680711\" returns successfully" Jun 25 14:55:19.822071 systemd[1]: run-containerd-runc-k8s.io-b684f7b11aef9afc44eaef08023b0e8caeb073ebbee426b3f96214b7f56bc4fe-runc.FmzENS.mount: Deactivated successfully. Jun 25 14:55:19.830877 kubelet[2843]: I0625 14:55:19.830844 2843 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6db9d89946-btghg" podStartSLOduration=2.384026127 podCreationTimestamp="2024-06-25 14:55:14 +0000 UTC" firstStartedPulling="2024-06-25 14:55:15.991907706 +0000 UTC m=+62.560065051" lastFinishedPulling="2024-06-25 14:55:19.438683019 +0000 UTC m=+66.006840364" observedRunningTime="2024-06-25 14:55:19.806192738 +0000 UTC m=+66.374350083" watchObservedRunningTime="2024-06-25 14:55:19.83080144 +0000 UTC m=+66.398958785" Jun 25 14:55:19.841000 audit[5561]: NETFILTER_CFG table=filter:128 family=2 entries=10 op=nft_register_rule pid=5561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:19.846401 kernel: kauditd_printk_skb: 56 callbacks suppressed Jun 25 14:55:19.846495 kernel: audit: type=1325 audit(1719327319.841:664): table=filter:128 family=2 entries=10 op=nft_register_rule pid=5561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:19.841000 audit[5561]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffea5c19f0 a2=0 a3=1 items=0 ppid=3025 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:19.885913 kernel: audit: type=1300 audit(1719327319.841:664): arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffea5c19f0 a2=0 a3=1 items=0 ppid=3025 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:19.841000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:19.887917 kernel: audit: type=1327 audit(1719327319.841:664): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:19.841000 audit[5561]: NETFILTER_CFG table=nat:129 family=2 entries=20 op=nft_register_rule pid=5561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:19.910851 kernel: audit: type=1325 audit(1719327319.841:665): table=nat:129 family=2 entries=20 op=nft_register_rule pid=5561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:19.841000 audit[5561]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffea5c19f0 a2=0 a3=1 items=0 ppid=3025 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:19.934615 kernel: audit: type=1300 audit(1719327319.841:665): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffea5c19f0 a2=0 a3=1 items=0 ppid=3025 pid=5561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:19.841000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:19.947013 kernel: audit: type=1327 audit(1719327319.841:665): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:19.935000 audit[5563]: NETFILTER_CFG table=filter:130 family=2 entries=10 op=nft_register_rule pid=5563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:19.959176 kernel: audit: type=1325 audit(1719327319.935:666): table=filter:130 family=2 entries=10 op=nft_register_rule pid=5563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:19.935000 audit[5563]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffc825bb00 a2=0 a3=1 items=0 ppid=3025 pid=5563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:19.983677 kernel: audit: type=1300 audit(1719327319.935:666): arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffc825bb00 a2=0 a3=1 items=0 ppid=3025 pid=5563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:19.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:19.996964 kernel: audit: type=1327 audit(1719327319.935:666): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:19.935000 audit[5563]: NETFILTER_CFG table=nat:131 family=2 entries=20 op=nft_register_rule pid=5563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:20.009309 kernel: audit: type=1325 audit(1719327319.935:667): table=nat:131 family=2 entries=20 op=nft_register_rule pid=5563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:19.935000 audit[5563]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc825bb00 a2=0 a3=1 items=0 ppid=3025 pid=5563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:19.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:20.661366 kubelet[2843]: I0625 14:55:20.661319 2843 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6db9d89946-gqwhg" podStartSLOduration=3.769317146 podCreationTimestamp="2024-06-25 14:55:14 +0000 UTC" firstStartedPulling="2024-06-25 14:55:15.891118333 +0000 UTC m=+62.459275678" lastFinishedPulling="2024-06-25 
14:55:18.783045877 +0000 UTC m=+65.351203222" observedRunningTime="2024-06-25 14:55:19.830404836 +0000 UTC m=+66.398562181" watchObservedRunningTime="2024-06-25 14:55:20.66124469 +0000 UTC m=+67.229401995" Jun 25 14:55:21.010000 audit[5567]: NETFILTER_CFG table=filter:132 family=2 entries=9 op=nft_register_rule pid=5567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:21.010000 audit[5567]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffe62d5f40 a2=0 a3=1 items=0 ppid=3025 pid=5567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:21.010000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:21.013000 audit[5567]: NETFILTER_CFG table=nat:133 family=2 entries=31 op=nft_register_chain pid=5567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:55:21.013000 audit[5567]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11236 a0=3 a1=ffffe62d5f40 a2=0 a3=1 items=0 ppid=3025 pid=5567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:21.013000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:55:26.020000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:26.024918 kernel: kauditd_printk_skb: 8 callbacks suppressed Jun 25 14:55:26.025020 kernel: audit: type=1400 audit(1719327326.020:670): avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:26.021000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:26.062647 kernel: audit: type=1400 audit(1719327326.021:671): avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:26.021000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=4002a612e0 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:55:26.090028 kernel: audit: type=1300 audit(1719327326.021:671): arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=4002a612e0 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 
key=(null) Jun 25 14:55:26.021000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:26.112059 kernel: audit: type=1327 audit(1719327326.021:671): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:26.020000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40029a10e0 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:55:26.137981 kernel: audit: type=1300 audit(1719327326.020:670): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40029a10e0 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:55:26.020000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:26.160507 kernel: audit: type=1327 audit(1719327326.020:670): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:26.022000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:26.179610 kernel: audit: type=1400 audit(1719327326.022:672): avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:26.022000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40029a1100 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:55:26.204640 kernel: audit: type=1300 audit(1719327326.022:672): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40029a1100 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:55:26.022000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:26.226721 kernel: audit: type=1327 audit(1719327326.022:672): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:26.024000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:26.246087 kernel: audit: type=1400 audit(1719327326.024:673): avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:26.024000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4002a61600 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:55:26.024000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:33.479160 systemd[1]: run-containerd-runc-k8s.io-b0b8c45a1f1f6b9537bf08d95bea92f489c63eb0614a88497520d6fc93eb04e9-runc.zInnJ3.mount: Deactivated successfully. Jun 25 14:55:49.746087 systemd[1]: run-containerd-runc-k8s.io-c18e0067a66301b341060c63335cd979b8fea934046d43b19d5a30fa4f3247f5-runc.FTypoO.mount: Deactivated successfully. Jun 25 14:55:51.353935 systemd[1]: run-containerd-runc-k8s.io-c18e0067a66301b341060c63335cd979b8fea934046d43b19d5a30fa4f3247f5-runc.NflQ2j.mount: Deactivated successfully. Jun 25 14:56:03.475409 systemd[1]: run-containerd-runc-k8s.io-b0b8c45a1f1f6b9537bf08d95bea92f489c63eb0614a88497520d6fc93eb04e9-runc.SY4LZn.mount: Deactivated successfully. 
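The PROCTITLE fields in the audit records above are the process command lines, hex-encoded with NUL bytes separating argv entries. A minimal decoding sketch, assuming Python 3 (the sample value is the one logged for iptables-restore, pid 5567):

    def decode_proctitle(hexstr: str) -> str:
        # audit records argv as hex; entries are separated by NUL bytes
        return bytes.fromhex(hexstr).decode("utf-8", "replace").replace("\x00", " ")

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters

The longer kube-controller-manager and kube-apiserver PROCTITLE values decode the same way; the audit subsystem truncates them, which is why they end mid-flag ("--authori", "/etc/kub").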
Jun 25 14:56:09.983000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=3646736 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:09.987817 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 14:56:09.987923 kernel: audit: type=1400 audit(1719327369.983:674): avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=3646736 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:09.983000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:10.026429 kernel: audit: type=1400 audit(1719327369.983:675): avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:09.983000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=72 a1=400bbf34d0 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:56:10.050615 kernel: audit: type=1300 audit(1719327369.983:675): arch=c00000b7 syscall=27 success=no exit=-13 a0=72 a1=400bbf34d0 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:56:09.983000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:56:10.073128 kernel: audit: type=1327 audit(1719327369.983:675): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:56:09.983000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:10.092141 kernel: audit: type=1400 audit(1719327369.983:676): avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:09.983000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=72 a1=40130069c0 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" 
subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:56:10.116483 kernel: audit: type=1300 audit(1719327369.983:676): arch=c00000b7 syscall=27 success=no exit=-13 a0=72 a1=40130069c0 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:56:09.983000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:56:10.137807 kernel: audit: type=1327 audit(1719327369.983:676): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:56:09.983000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=70 a1=400c27c330 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:56:10.161647 kernel: audit: type=1300 audit(1719327369.983:674): arch=c00000b7 syscall=27 success=no exit=-13 a0=70 a1=400c27c330 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:56:09.983000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:56:10.182521 kernel: audit: type=1327 audit(1719327369.983:674): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:56:09.987000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=3646742 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:10.202356 kernel: audit: type=1400 audit(1719327369.987:677): avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=3646742 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:09.987000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=70 a1=400c27c390 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:56:09.987000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:56:10.019000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:10.019000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=72 a1=40134b2820 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:56:10.019000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:56:10.030000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:10.030000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=72 a1=400bbf3980 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:56:10.030000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:56:10.731000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:10.731000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=400246ba10 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:56:10.731000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:56:10.731000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:10.731000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=400114e3e0 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:56:10.731000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:56:11.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.34:22-10.200.16.10:38058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:11.168181 systemd[1]: Started sshd@7-10.200.20.34:22-10.200.16.10:38058.service - OpenSSH per-connection server daemon (10.200.16.10:38058). Jun 25 14:56:11.614707 sshd[5699]: Accepted publickey for core from 10.200.16.10 port 38058 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:56:11.612000 audit[5699]: USER_ACCT pid=5699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:11.614000 audit[5699]: CRED_ACQ pid=5699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:11.614000 audit[5699]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdbe23c20 a2=3 a3=1 items=0 ppid=1 pid=5699 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:11.614000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:11.616968 sshd[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:56:11.621727 systemd-logind[1476]: New session 10 of user core. Jun 25 14:56:11.625962 systemd[1]: Started session-10.scope - Session 10 of User core. 
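The recurring AVC records above show kube-controller-manager (pid 2694) and kube-apiserver (pid 2733) being denied { watch } on the certificate files under /etc/kubernetes/pki: with permissive=0, SELinux refuses to let a container_t process place an inotify watch on an etc_t file, so the call fails with EACCES (exit=-13). A small reference sketch for reading the accompanying SYSCALL lines, assuming the arm64 numbering implied by arch=c00000b7 (AUDIT_ARCH_AARCH64):

    import errno

    # Syscall numbers seen in the records in this log (arm64 / asm-generic table)
    SYSCALLS_SEEN = {
        27: "inotify_add_watch",   # the denied { watch } attempts on /etc/kubernetes/pki/*.crt
        64: "write",               # sshd emitting its records
        211: "sendmsg",            # iptables-restore pushing nftables rules over netlink
    }

    def explain(nr: int, exit_code: int) -> str:
        name = SYSCALLS_SEEN.get(nr, f"syscall {nr}")
        if exit_code < 0:
            return f"{name} failed with {errno.errorcode.get(-exit_code, str(exit_code))}"
        return f"{name} returned {exit_code}"

    print(explain(27, -13))   # -> inotify_add_watch failed with EACCES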
Jun 25 14:56:11.630000 audit[5699]: USER_START pid=5699 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:11.632000 audit[5701]: CRED_ACQ pid=5701 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:12.011600 sshd[5699]: pam_unix(sshd:session): session closed for user core Jun 25 14:56:12.011000 audit[5699]: USER_END pid=5699 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:12.011000 audit[5699]: CRED_DISP pid=5699 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:12.015077 systemd[1]: sshd@7-10.200.20.34:22-10.200.16.10:38058.service: Deactivated successfully. Jun 25 14:56:12.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.34:22-10.200.16.10:38058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:12.015921 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 14:56:12.016554 systemd-logind[1476]: Session 10 logged out. Waiting for processes to exit. Jun 25 14:56:12.017413 systemd-logind[1476]: Removed session 10. Jun 25 14:56:17.092985 systemd[1]: Started sshd@8-10.200.20.34:22-10.200.16.10:34922.service - OpenSSH per-connection server daemon (10.200.16.10:34922). Jun 25 14:56:17.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.34:22-10.200.16.10:34922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:17.098527 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 14:56:17.098585 kernel: audit: type=1130 audit(1719327377.092:691): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.34:22-10.200.16.10:34922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:56:17.554000 audit[5719]: USER_ACCT pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:17.556022 sshd[5719]: Accepted publickey for core from 10.200.16.10 port 34922 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:56:17.578000 audit[5719]: CRED_ACQ pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:17.579888 sshd[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:56:17.598180 kernel: audit: type=1101 audit(1719327377.554:692): pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:17.598295 kernel: audit: type=1103 audit(1719327377.578:693): pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:17.611372 kernel: audit: type=1006 audit(1719327377.578:694): pid=5719 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jun 25 14:56:17.578000 audit[5719]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd5bb740 a2=3 a3=1 items=0 ppid=1 pid=5719 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:17.632947 kernel: audit: type=1300 audit(1719327377.578:694): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd5bb740 a2=3 a3=1 items=0 ppid=1 pid=5719 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:17.578000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:17.640298 systemd-logind[1476]: New session 11 of user core. Jun 25 14:56:17.645509 kernel: audit: type=1327 audit(1719327377.578:694): proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:17.645001 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jun 25 14:56:17.649000 audit[5719]: USER_START pid=5719 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:17.651000 audit[5721]: CRED_ACQ pid=5721 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:17.692926 kernel: audit: type=1105 audit(1719327377.649:695): pid=5719 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:17.693078 kernel: audit: type=1103 audit(1719327377.651:696): pid=5721 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:17.961658 sshd[5719]: pam_unix(sshd:session): session closed for user core Jun 25 14:56:17.962000 audit[5719]: USER_END pid=5719 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:17.965967 systemd-logind[1476]: Session 11 logged out. Waiting for processes to exit. Jun 25 14:56:17.967208 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 14:56:17.968464 systemd-logind[1476]: Removed session 11. Jun 25 14:56:17.969133 systemd[1]: sshd@8-10.200.20.34:22-10.200.16.10:34922.service: Deactivated successfully. Jun 25 14:56:17.962000 audit[5719]: CRED_DISP pid=5719 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:18.003764 kernel: audit: type=1106 audit(1719327377.962:697): pid=5719 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:18.007344 kernel: audit: type=1104 audit(1719327377.962:698): pid=5719 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:17.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.34:22-10.200.16.10:34922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:19.746252 systemd[1]: run-containerd-runc-k8s.io-c18e0067a66301b341060c63335cd979b8fea934046d43b19d5a30fa4f3247f5-runc.74h72R.mount: Deactivated successfully. 
Jun 25 14:56:23.044417 systemd[1]: Started sshd@9-10.200.20.34:22-10.200.16.10:34936.service - OpenSSH per-connection server daemon (10.200.16.10:34936). Jun 25 14:56:23.071675 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:56:23.071916 kernel: audit: type=1130 audit(1719327383.044:700): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.34:22-10.200.16.10:34936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:23.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.34:22-10.200.16.10:34936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:23.450000 audit[5764]: USER_ACCT pid=5764 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:23.452367 sshd[5764]: Accepted publickey for core from 10.200.16.10 port 34936 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:56:23.473880 kernel: audit: type=1101 audit(1719327383.450:701): pid=5764 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:23.473000 audit[5764]: CRED_ACQ pid=5764 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:23.474526 sshd[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:56:23.506606 kernel: audit: type=1103 audit(1719327383.473:702): pid=5764 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:23.506711 kernel: audit: type=1006 audit(1719327383.473:703): pid=5764 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jun 25 14:56:23.473000 audit[5764]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4400eb0 a2=3 a3=1 items=0 ppid=1 pid=5764 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:23.530180 kernel: audit: type=1300 audit(1719327383.473:703): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4400eb0 a2=3 a3=1 items=0 ppid=1 pid=5764 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:23.473000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:23.538763 kernel: audit: type=1327 audit(1719327383.473:703): proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:23.543307 systemd-logind[1476]: New session 12 of user core. Jun 25 14:56:23.549023 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 25 14:56:23.552000 audit[5764]: USER_START pid=5764 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:23.576000 audit[5766]: CRED_ACQ pid=5766 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:23.594948 kernel: audit: type=1105 audit(1719327383.552:704): pid=5764 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:23.595028 kernel: audit: type=1103 audit(1719327383.576:705): pid=5766 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:23.871032 sshd[5764]: pam_unix(sshd:session): session closed for user core Jun 25 14:56:23.871000 audit[5764]: USER_END pid=5764 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:23.871000 audit[5764]: CRED_DISP pid=5764 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:23.896180 systemd[1]: sshd@9-10.200.20.34:22-10.200.16.10:34936.service: Deactivated successfully. Jun 25 14:56:23.897015 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 14:56:23.900109 systemd-logind[1476]: Session 12 logged out. Waiting for processes to exit. Jun 25 14:56:23.901084 systemd-logind[1476]: Removed session 12. Jun 25 14:56:23.913070 kernel: audit: type=1106 audit(1719327383.871:706): pid=5764 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:23.913150 kernel: audit: type=1104 audit(1719327383.871:707): pid=5764 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:23.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.34:22-10.200.16.10:34936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:56:26.021000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:26.021000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:26.021000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=4000a92a60 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:56:26.021000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:56:26.021000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4002663520 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:56:26.021000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:56:26.023000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:26.023000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000a92de0 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:56:26.023000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:56:26.024000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:56:26.024000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000a93260 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:56:26.024000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:56:28.968942 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 14:56:28.969078 kernel: audit: type=1130 audit(1719327388.945:713): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.34:22-10.200.16.10:52000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:28.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.34:22-10.200.16.10:52000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:28.945696 systemd[1]: Started sshd@10-10.200.20.34:22-10.200.16.10:52000.service - OpenSSH per-connection server daemon (10.200.16.10:52000). Jun 25 14:56:29.352000 audit[5779]: USER_ACCT pid=5779 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.353399 sshd[5779]: Accepted publickey for core from 10.200.16.10 port 52000 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:56:29.355284 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:56:29.354000 audit[5779]: CRED_ACQ pid=5779 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.394084 kernel: audit: type=1101 audit(1719327389.352:714): pid=5779 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.394270 kernel: audit: type=1103 audit(1719327389.354:715): pid=5779 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.399361 systemd-logind[1476]: New session 13 of user core. 
Jun 25 14:56:29.429636 kernel: audit: type=1006 audit(1719327389.354:716): pid=5779 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jun 25 14:56:29.429673 kernel: audit: type=1300 audit(1719327389.354:716): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9201a20 a2=3 a3=1 items=0 ppid=1 pid=5779 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:29.354000 audit[5779]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9201a20 a2=3 a3=1 items=0 ppid=1 pid=5779 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:29.354000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:29.429086 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 14:56:29.436558 kernel: audit: type=1327 audit(1719327389.354:716): proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:29.434000 audit[5779]: USER_START pid=5779 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.458956 kernel: audit: type=1105 audit(1719327389.434:717): pid=5779 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.436000 audit[5781]: CRED_ACQ pid=5781 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.480439 kernel: audit: type=1103 audit(1719327389.436:718): pid=5781 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.732007 sshd[5779]: pam_unix(sshd:session): session closed for user core Jun 25 14:56:29.732000 audit[5779]: USER_END pid=5779 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.735753 systemd-logind[1476]: Session 13 logged out. Waiting for processes to exit. Jun 25 14:56:29.737075 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 14:56:29.738448 systemd-logind[1476]: Removed session 13. Jun 25 14:56:29.739108 systemd[1]: sshd@10-10.200.20.34:22-10.200.16.10:52000.service: Deactivated successfully. 
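The sshd entries in this stretch all follow the same audited lifecycle: systemd logs SERVICE_START for the per-connection unit, sshd logs USER_ACCT and CRED_ACQ while still unassigned to a login session (ses=4294967295, the unset value), then USER_START and a second CRED_ACQ carry the new ses number, and USER_END, CRED_DISP and SERVICE_STOP close it out. A sketch that groups those records per session id, assuming the journal lines are available as plain strings (the input name is a placeholder):

    import re
    from collections import defaultdict

    # ses=4294967295 is the "unset" value used before a login session id is assigned
    EVENT = re.compile(r"audit\[\d+\]: (?P<type>[A-Z_]+) .*?\bses=(?P<ses>\d+)")

    def sessions(lines):
        by_ses = defaultdict(list)
        for line in lines:
            m = EVENT.search(line)
            if m and m["ses"] != "4294967295":
                by_ses[m["ses"]].append(m["type"])
        return dict(by_ses)

    # e.g. session "10" above collects SYSCALL, USER_START, CRED_ACQ, USER_END, CRED_DISP in order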
Jun 25 14:56:29.732000 audit[5779]: CRED_DISP pid=5779 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.774235 kernel: audit: type=1106 audit(1719327389.732:719): pid=5779 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.774329 kernel: audit: type=1104 audit(1719327389.732:720): pid=5779 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.34:22-10.200.16.10:52000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:33.473313 systemd[1]: run-containerd-runc-k8s.io-b0b8c45a1f1f6b9537bf08d95bea92f489c63eb0614a88497520d6fc93eb04e9-runc.oD1Apq.mount: Deactivated successfully. Jun 25 14:56:34.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.34:22-10.200.16.10:59102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:34.814066 systemd[1]: Started sshd@11-10.200.20.34:22-10.200.16.10:59102.service - OpenSSH per-connection server daemon (10.200.16.10:59102). Jun 25 14:56:34.819807 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:56:34.819894 kernel: audit: type=1130 audit(1719327394.813:722): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.34:22-10.200.16.10:59102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:56:35.225000 audit[5833]: USER_ACCT pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:35.247933 sshd[5833]: Accepted publickey for core from 10.200.16.10 port 59102 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:56:35.247000 audit[5833]: CRED_ACQ pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:35.267250 kernel: audit: type=1101 audit(1719327395.225:723): pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:35.267360 kernel: audit: type=1103 audit(1719327395.247:724): pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:35.248632 sshd[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:56:35.281596 kernel: audit: type=1006 audit(1719327395.247:725): pid=5833 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jun 25 14:56:35.281721 kernel: audit: type=1300 audit(1719327395.247:725): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4b6f3e0 a2=3 a3=1 items=0 ppid=1 pid=5833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:35.247000 audit[5833]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4b6f3e0 a2=3 a3=1 items=0 ppid=1 pid=5833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:35.254834 systemd-logind[1476]: New session 14 of user core. Jun 25 14:56:35.281084 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 14:56:35.247000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:35.308757 kernel: audit: type=1327 audit(1719327395.247:725): proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:35.286000 audit[5833]: USER_START pid=5833 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:35.330929 kernel: audit: type=1105 audit(1719327395.286:726): pid=5833 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:35.287000 audit[5835]: CRED_ACQ pid=5835 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:35.350151 kernel: audit: type=1103 audit(1719327395.287:727): pid=5835 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:35.644022 sshd[5833]: pam_unix(sshd:session): session closed for user core Jun 25 14:56:35.645000 audit[5833]: USER_END pid=5833 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:35.648343 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 14:56:35.649027 systemd[1]: sshd@11-10.200.20.34:22-10.200.16.10:59102.service: Deactivated successfully. Jun 25 14:56:35.650248 systemd-logind[1476]: Session 14 logged out. Waiting for processes to exit. Jun 25 14:56:35.651148 systemd-logind[1476]: Removed session 14. Jun 25 14:56:35.645000 audit[5833]: CRED_DISP pid=5833 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:35.688224 kernel: audit: type=1106 audit(1719327395.645:728): pid=5833 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:35.688351 kernel: audit: type=1104 audit(1719327395.645:729): pid=5833 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:35.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.34:22-10.200.16.10:59102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:56:35.720139 systemd[1]: Started sshd@12-10.200.20.34:22-10.200.16.10:59104.service - OpenSSH per-connection server daemon (10.200.16.10:59104). Jun 25 14:56:35.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.34:22-10.200.16.10:59104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:36.129000 audit[5846]: USER_ACCT pid=5846 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:36.130997 sshd[5846]: Accepted publickey for core from 10.200.16.10 port 59104 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:56:36.131000 audit[5846]: CRED_ACQ pid=5846 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:36.131000 audit[5846]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdc57b2d0 a2=3 a3=1 items=0 ppid=1 pid=5846 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:36.131000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:36.132745 sshd[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:56:36.137857 systemd-logind[1476]: New session 15 of user core. Jun 25 14:56:36.140980 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 14:56:36.146000 audit[5846]: USER_START pid=5846 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:36.148000 audit[5850]: CRED_ACQ pid=5850 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:37.208265 sshd[5846]: pam_unix(sshd:session): session closed for user core Jun 25 14:56:37.209000 audit[5846]: USER_END pid=5846 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:37.209000 audit[5846]: CRED_DISP pid=5846 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:37.212527 systemd-logind[1476]: Session 15 logged out. Waiting for processes to exit. Jun 25 14:56:37.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.34:22-10.200.16.10:59104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:56:37.213485 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 14:56:37.214205 systemd[1]: sshd@12-10.200.20.34:22-10.200.16.10:59104.service: Deactivated successfully. Jun 25 14:56:37.215747 systemd-logind[1476]: Removed session 15. Jun 25 14:56:37.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.34:22-10.200.16.10:59120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:37.306697 systemd[1]: Started sshd@13-10.200.20.34:22-10.200.16.10:59120.service - OpenSSH per-connection server daemon (10.200.16.10:59120). Jun 25 14:56:37.748000 audit[5859]: USER_ACCT pid=5859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:37.749183 sshd[5859]: Accepted publickey for core from 10.200.16.10 port 59120 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:56:37.749000 audit[5859]: CRED_ACQ pid=5859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:37.749000 audit[5859]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc751c3d0 a2=3 a3=1 items=0 ppid=1 pid=5859 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:37.749000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:37.751015 sshd[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:56:37.756490 systemd-logind[1476]: New session 16 of user core. Jun 25 14:56:37.760010 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 14:56:37.764000 audit[5859]: USER_START pid=5859 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:37.765000 audit[5861]: CRED_ACQ pid=5861 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:38.132730 sshd[5859]: pam_unix(sshd:session): session closed for user core Jun 25 14:56:38.133000 audit[5859]: USER_END pid=5859 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:38.134000 audit[5859]: CRED_DISP pid=5859 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:38.136696 systemd-logind[1476]: Session 16 logged out. Waiting for processes to exit. 
Jun 25 14:56:38.136823 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 14:56:38.138451 systemd[1]: sshd@13-10.200.20.34:22-10.200.16.10:59120.service: Deactivated successfully. Jun 25 14:56:38.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.34:22-10.200.16.10:59120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:38.139458 systemd-logind[1476]: Removed session 16. Jun 25 14:56:43.210306 systemd[1]: Started sshd@14-10.200.20.34:22-10.200.16.10:59128.service - OpenSSH per-connection server daemon (10.200.16.10:59128). Jun 25 14:56:43.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.34:22-10.200.16.10:59128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:43.214761 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 14:56:43.214984 kernel: audit: type=1130 audit(1719327403.208:749): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.34:22-10.200.16.10:59128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:43.617000 audit[5877]: USER_ACCT pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:43.621280 sshd[5877]: Accepted publickey for core from 10.200.16.10 port 59128 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:56:43.622241 sshd[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:56:43.617000 audit[5877]: CRED_ACQ pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:43.658696 kernel: audit: type=1101 audit(1719327403.617:750): pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:43.658859 kernel: audit: type=1103 audit(1719327403.617:751): pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:43.663480 systemd-logind[1476]: New session 17 of user core. 
Jun 25 14:56:43.702492 kernel: audit: type=1006 audit(1719327403.617:752): pid=5877 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 14:56:43.702523 kernel: audit: type=1300 audit(1719327403.617:752): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3cc87f0 a2=3 a3=1 items=0 ppid=1 pid=5877 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:43.702550 kernel: audit: type=1327 audit(1719327403.617:752): proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:43.617000 audit[5877]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3cc87f0 a2=3 a3=1 items=0 ppid=1 pid=5877 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:43.617000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:43.702069 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 14:56:43.705000 audit[5877]: USER_START pid=5877 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:43.705000 audit[5879]: CRED_ACQ pid=5879 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:43.749720 kernel: audit: type=1105 audit(1719327403.705:753): pid=5877 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:43.749880 kernel: audit: type=1103 audit(1719327403.705:754): pid=5879 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:44.029032 sshd[5877]: pam_unix(sshd:session): session closed for user core Jun 25 14:56:44.028000 audit[5877]: USER_END pid=5877 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:44.032420 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 14:56:44.033688 systemd[1]: sshd@14-10.200.20.34:22-10.200.16.10:59128.service: Deactivated successfully. Jun 25 14:56:44.053165 systemd-logind[1476]: Session 17 logged out. Waiting for processes to exit. Jun 25 14:56:44.028000 audit[5877]: CRED_DISP pid=5877 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:44.054375 systemd-logind[1476]: Removed session 17. 
Jun 25 14:56:44.074579 kernel: audit: type=1106 audit(1719327404.028:755): pid=5877 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:44.074725 kernel: audit: type=1104 audit(1719327404.028:756): pid=5877 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:44.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.34:22-10.200.16.10:59128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:49.118752 systemd[1]: Started sshd@15-10.200.20.34:22-10.200.16.10:52942.service - OpenSSH per-connection server daemon (10.200.16.10:52942). Jun 25 14:56:49.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.34:22-10.200.16.10:52942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:49.123708 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:56:49.123839 kernel: audit: type=1130 audit(1719327409.117:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.34:22-10.200.16.10:52942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:49.559000 audit[5893]: USER_ACCT pid=5893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:49.561814 sshd[5893]: Accepted publickey for core from 10.200.16.10 port 52942 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:56:49.582000 audit[5893]: CRED_ACQ pid=5893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:49.584757 sshd[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:56:49.591097 systemd-logind[1476]: New session 18 of user core. 
Jun 25 14:56:49.616836 kernel: audit: type=1101 audit(1719327409.559:759): pid=5893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:49.616880 kernel: audit: type=1103 audit(1719327409.582:760): pid=5893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:49.616912 kernel: audit: type=1006 audit(1719327409.582:761): pid=5893 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jun 25 14:56:49.616935 kernel: audit: type=1300 audit(1719327409.582:761): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffca83baa0 a2=3 a3=1 items=0 ppid=1 pid=5893 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:49.582000 audit[5893]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffca83baa0 a2=3 a3=1 items=0 ppid=1 pid=5893 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:49.616140 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 14:56:49.582000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:49.646088 kernel: audit: type=1327 audit(1719327409.582:761): proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:49.637000 audit[5893]: USER_START pid=5893 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:49.669263 kernel: audit: type=1105 audit(1719327409.637:762): pid=5893 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:49.639000 audit[5895]: CRED_ACQ pid=5895 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:49.688436 kernel: audit: type=1103 audit(1719327409.639:763): pid=5895 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:49.744311 systemd[1]: run-containerd-runc-k8s.io-c18e0067a66301b341060c63335cd979b8fea934046d43b19d5a30fa4f3247f5-runc.GLsBOy.mount: Deactivated successfully. 
Jun 25 14:56:49.974553 sshd[5893]: pam_unix(sshd:session): session closed for user core Jun 25 14:56:49.974000 audit[5893]: USER_END pid=5893 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:49.978965 systemd-logind[1476]: Session 18 logged out. Waiting for processes to exit. Jun 25 14:56:49.980336 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 14:56:49.981600 systemd-logind[1476]: Removed session 18. Jun 25 14:56:49.982560 systemd[1]: sshd@15-10.200.20.34:22-10.200.16.10:52942.service: Deactivated successfully. Jun 25 14:56:49.974000 audit[5893]: CRED_DISP pid=5893 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:50.021049 kernel: audit: type=1106 audit(1719327409.974:764): pid=5893 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:50.021193 kernel: audit: type=1104 audit(1719327409.974:765): pid=5893 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:49.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.34:22-10.200.16.10:52942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:51.354183 systemd[1]: run-containerd-runc-k8s.io-c18e0067a66301b341060c63335cd979b8fea934046d43b19d5a30fa4f3247f5-runc.NisvDA.mount: Deactivated successfully. Jun 25 14:56:55.062738 systemd[1]: Started sshd@16-10.200.20.34:22-10.200.16.10:54480.service - OpenSSH per-connection server daemon (10.200.16.10:54480). Jun 25 14:56:55.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.34:22-10.200.16.10:54480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:55.068973 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:56:55.069101 kernel: audit: type=1130 audit(1719327415.062:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.34:22-10.200.16.10:54480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:56:55.505000 audit[5952]: USER_ACCT pid=5952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:55.506600 sshd[5952]: Accepted publickey for core from 10.200.16.10 port 54480 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:56:55.508565 sshd[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:56:55.515088 systemd-logind[1476]: New session 19 of user core. Jun 25 14:56:55.561795 kernel: audit: type=1101 audit(1719327415.505:768): pid=5952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:55.561838 kernel: audit: type=1103 audit(1719327415.507:769): pid=5952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:55.561866 kernel: audit: type=1006 audit(1719327415.507:770): pid=5952 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jun 25 14:56:55.561901 kernel: audit: type=1300 audit(1719327415.507:770): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7aea360 a2=3 a3=1 items=0 ppid=1 pid=5952 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:55.507000 audit[5952]: CRED_ACQ pid=5952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:55.507000 audit[5952]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7aea360 a2=3 a3=1 items=0 ppid=1 pid=5952 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:55.561222 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 25 14:56:55.507000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:55.592664 kernel: audit: type=1327 audit(1719327415.507:770): proctitle=737368643A20636F7265205B707269765D Jun 25 14:56:55.567000 audit[5952]: USER_START pid=5952 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:55.616950 kernel: audit: type=1105 audit(1719327415.567:771): pid=5952 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:55.570000 audit[5954]: CRED_ACQ pid=5954 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:55.637185 kernel: audit: type=1103 audit(1719327415.570:772): pid=5954 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:55.934210 sshd[5952]: pam_unix(sshd:session): session closed for user core Jun 25 14:56:55.934000 audit[5952]: USER_END pid=5952 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:55.937665 systemd-logind[1476]: Session 19 logged out. Waiting for processes to exit. Jun 25 14:56:55.939083 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 14:56:55.940323 systemd-logind[1476]: Removed session 19. Jun 25 14:56:55.941078 systemd[1]: sshd@16-10.200.20.34:22-10.200.16.10:54480.service: Deactivated successfully. Jun 25 14:56:55.934000 audit[5952]: CRED_DISP pid=5952 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:55.978664 kernel: audit: type=1106 audit(1719327415.934:773): pid=5952 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:55.978827 kernel: audit: type=1104 audit(1719327415.934:774): pid=5952 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:55.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.34:22-10.200.16.10:54480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:57:01.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.34:22-10.200.16.10:54490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:01.019227 systemd[1]: Started sshd@17-10.200.20.34:22-10.200.16.10:54490.service - OpenSSH per-connection server daemon (10.200.16.10:54490). Jun 25 14:57:01.023815 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:57:01.023950 kernel: audit: type=1130 audit(1719327421.018:776): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.34:22-10.200.16.10:54490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:01.469000 audit[5965]: USER_ACCT pid=5965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:01.471344 sshd[5965]: Accepted publickey for core from 10.200.16.10 port 54490 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:57:01.473401 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:57:01.472000 audit[5965]: CRED_ACQ pid=5965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:01.512719 kernel: audit: type=1101 audit(1719327421.469:777): pid=5965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:01.512856 kernel: audit: type=1103 audit(1719327421.472:778): pid=5965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:01.517570 systemd-logind[1476]: New session 20 of user core. Jun 25 14:57:01.527611 kernel: audit: type=1006 audit(1719327421.472:779): pid=5965 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jun 25 14:57:01.527651 kernel: audit: type=1300 audit(1719327421.472:779): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc9127840 a2=3 a3=1 items=0 ppid=1 pid=5965 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:01.472000 audit[5965]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc9127840 a2=3 a3=1 items=0 ppid=1 pid=5965 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:01.527057 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 25 14:57:01.472000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:01.554125 kernel: audit: type=1327 audit(1719327421.472:779): proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:01.532000 audit[5965]: USER_START pid=5965 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:01.577098 kernel: audit: type=1105 audit(1719327421.532:780): pid=5965 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:01.536000 audit[5967]: CRED_ACQ pid=5967 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:01.595885 kernel: audit: type=1103 audit(1719327421.536:781): pid=5967 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:01.858047 sshd[5965]: pam_unix(sshd:session): session closed for user core Jun 25 14:57:01.858000 audit[5965]: USER_END pid=5965 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:01.861962 systemd-logind[1476]: Session 20 logged out. Waiting for processes to exit. Jun 25 14:57:01.863540 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 14:57:01.865022 systemd-logind[1476]: Removed session 20. Jun 25 14:57:01.865847 systemd[1]: sshd@17-10.200.20.34:22-10.200.16.10:54490.service: Deactivated successfully. Jun 25 14:57:01.859000 audit[5965]: CRED_DISP pid=5965 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:01.900576 kernel: audit: type=1106 audit(1719327421.858:782): pid=5965 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:01.900747 kernel: audit: type=1104 audit(1719327421.859:783): pid=5965 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:01.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.34:22-10.200.16.10:54490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:57:03.473459 systemd[1]: run-containerd-runc-k8s.io-b0b8c45a1f1f6b9537bf08d95bea92f489c63eb0614a88497520d6fc93eb04e9-runc.uuKfTa.mount: Deactivated successfully. Jun 25 14:57:06.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.34:22-10.200.16.10:55934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:06.931105 systemd[1]: Started sshd@18-10.200.20.34:22-10.200.16.10:55934.service - OpenSSH per-connection server daemon (10.200.16.10:55934). Jun 25 14:57:06.935160 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:57:06.935285 kernel: audit: type=1130 audit(1719327426.930:785): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.34:22-10.200.16.10:55934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:07.339000 audit[6004]: USER_ACCT pid=6004 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:07.340422 sshd[6004]: Accepted publickey for core from 10.200.16.10 port 55934 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:57:07.360000 audit[6004]: CRED_ACQ pid=6004 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:07.362404 sshd[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:57:07.368850 systemd-logind[1476]: New session 21 of user core. Jun 25 14:57:07.419658 kernel: audit: type=1101 audit(1719327427.339:786): pid=6004 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:07.419690 kernel: audit: type=1103 audit(1719327427.360:787): pid=6004 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:07.419709 kernel: audit: type=1006 audit(1719327427.361:788): pid=6004 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jun 25 14:57:07.419728 kernel: audit: type=1300 audit(1719327427.361:788): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3c483c0 a2=3 a3=1 items=0 ppid=1 pid=6004 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:07.361000 audit[6004]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3c483c0 a2=3 a3=1 items=0 ppid=1 pid=6004 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:07.361000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:07.419087 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jun 25 14:57:07.426996 kernel: audit: type=1327 audit(1719327427.361:788): proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:07.427102 kernel: audit: type=1105 audit(1719327427.424:789): pid=6004 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:07.424000 audit[6004]: USER_START pid=6004 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:07.426000 audit[6006]: CRED_ACQ pid=6006 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:07.466581 kernel: audit: type=1103 audit(1719327427.426:790): pid=6006 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:07.733752 sshd[6004]: pam_unix(sshd:session): session closed for user core Jun 25 14:57:07.734000 audit[6004]: USER_END pid=6004 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:07.738171 systemd-logind[1476]: Session 21 logged out. Waiting for processes to exit. Jun 25 14:57:07.739472 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 14:57:07.740816 systemd-logind[1476]: Removed session 21. Jun 25 14:57:07.741530 systemd[1]: sshd@18-10.200.20.34:22-10.200.16.10:55934.service: Deactivated successfully. Jun 25 14:57:07.735000 audit[6004]: CRED_DISP pid=6004 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:07.776071 kernel: audit: type=1106 audit(1719327427.734:791): pid=6004 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:07.776213 kernel: audit: type=1104 audit(1719327427.735:792): pid=6004 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:07.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.34:22-10.200.16.10:55934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:57:09.984000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=3646736 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:57:09.984000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:57:09.984000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=70 a1=4004a65500 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:57:09.984000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:57:09.984000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6e a1=400731ba60 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:57:09.984000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:57:09.985000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:57:09.985000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=70 a1=4004a65b60 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:57:09.985000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:57:09.989000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=3646742 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:57:09.989000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6e a1=4005244e70 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:57:09.989000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:57:10.020000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:57:10.020000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6e a1=400731bbe0 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:57:10.020000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:57:10.032000 audit[2733]: AVC avc: denied { watch } for pid=2733 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c134,c352 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:57:10.032000 audit[2733]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6e a1=4004a65cb0 a2=fc6 a3=0 items=0 ppid=2546 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c134,c352 key=(null) Jun 25 14:57:10.032000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3334002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:57:10.731000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:57:10.731000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4002ca6ea0 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:57:10.731000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:57:10.732000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:57:10.732000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4002e582d0 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:57:10.732000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:57:12.813912 systemd[1]: Started sshd@19-10.200.20.34:22-10.200.16.10:55950.service - OpenSSH per-connection server daemon (10.200.16.10:55950). Jun 25 14:57:12.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.34:22-10.200.16.10:55950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:12.818136 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 14:57:12.818228 kernel: audit: type=1130 audit(1719327432.813:802): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.34:22-10.200.16.10:55950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:13.220000 audit[6016]: USER_ACCT pid=6016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:13.221829 sshd[6016]: Accepted publickey for core from 10.200.16.10 port 55950 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:57:13.223922 sshd[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:57:13.247190 systemd-logind[1476]: New session 22 of user core. Jun 25 14:57:13.251008 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jun 25 14:57:13.222000 audit[6016]: CRED_ACQ pid=6016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:13.270918 kernel: audit: type=1101 audit(1719327433.220:803): pid=6016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:13.271048 kernel: audit: type=1103 audit(1719327433.222:804): pid=6016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:13.284510 kernel: audit: type=1006 audit(1719327433.222:805): pid=6016 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jun 25 14:57:13.222000 audit[6016]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff2f9040 a2=3 a3=1 items=0 ppid=1 pid=6016 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:13.306163 kernel: audit: type=1300 audit(1719327433.222:805): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff2f9040 a2=3 a3=1 items=0 ppid=1 pid=6016 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:13.222000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:13.314634 kernel: audit: type=1327 audit(1719327433.222:805): proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:13.287000 audit[6016]: USER_START pid=6016 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:13.336620 kernel: audit: type=1105 audit(1719327433.287:806): pid=6016 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:13.291000 audit[6018]: CRED_ACQ pid=6018 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:13.355466 kernel: audit: type=1103 audit(1719327433.291:807): pid=6018 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:13.629007 sshd[6016]: pam_unix(sshd:session): session closed for user core Jun 25 14:57:13.629000 audit[6016]: USER_END pid=6016 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:13.632554 systemd-logind[1476]: Session 22 logged out. Waiting for processes to exit. Jun 25 14:57:13.633823 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 14:57:13.635169 systemd-logind[1476]: Removed session 22. Jun 25 14:57:13.635975 systemd[1]: sshd@19-10.200.20.34:22-10.200.16.10:55950.service: Deactivated successfully. Jun 25 14:57:13.629000 audit[6016]: CRED_DISP pid=6016 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:13.672845 kernel: audit: type=1106 audit(1719327433.629:808): pid=6016 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:13.672970 kernel: audit: type=1104 audit(1719327433.629:809): pid=6016 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:13.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.34:22-10.200.16.10:55950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:13.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.34:22-10.200.16.10:55958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:13.714212 systemd[1]: Started sshd@20-10.200.20.34:22-10.200.16.10:55958.service - OpenSSH per-connection server daemon (10.200.16.10:55958). Jun 25 14:57:14.158000 audit[6030]: USER_ACCT pid=6030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:14.160609 sshd[6030]: Accepted publickey for core from 10.200.16.10 port 55958 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:57:14.161380 sshd[6030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:57:14.160000 audit[6030]: CRED_ACQ pid=6030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:14.160000 audit[6030]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffde88d40 a2=3 a3=1 items=0 ppid=1 pid=6030 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:14.160000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:14.166845 systemd-logind[1476]: New session 23 of user core. 
Jun 25 14:57:14.170993 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 14:57:14.175000 audit[6030]: USER_START pid=6030 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:14.177000 audit[6032]: CRED_ACQ pid=6032 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:14.664921 sshd[6030]: pam_unix(sshd:session): session closed for user core Jun 25 14:57:14.665000 audit[6030]: USER_END pid=6030 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:14.665000 audit[6030]: CRED_DISP pid=6030 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:14.668365 systemd[1]: sshd@20-10.200.20.34:22-10.200.16.10:55958.service: Deactivated successfully. Jun 25 14:57:14.669227 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 14:57:14.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.34:22-10.200.16.10:55958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:14.670157 systemd-logind[1476]: Session 23 logged out. Waiting for processes to exit. Jun 25 14:57:14.671050 systemd-logind[1476]: Removed session 23. Jun 25 14:57:14.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.34:22-10.200.16.10:56262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:14.747072 systemd[1]: Started sshd@21-10.200.20.34:22-10.200.16.10:56262.service - OpenSSH per-connection server daemon (10.200.16.10:56262). 
Jun 25 14:57:15.196000 audit[6041]: USER_ACCT pid=6041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:15.197104 sshd[6041]: Accepted publickey for core from 10.200.16.10 port 56262 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:57:15.197000 audit[6041]: CRED_ACQ pid=6041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:15.197000 audit[6041]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff480ecf0 a2=3 a3=1 items=0 ppid=1 pid=6041 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:15.197000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:15.198837 sshd[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:57:15.204337 systemd-logind[1476]: New session 24 of user core. Jun 25 14:57:15.205993 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 14:57:15.209000 audit[6041]: USER_START pid=6041 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:15.211000 audit[6048]: CRED_ACQ pid=6048 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:16.296000 audit[6058]: NETFILTER_CFG table=filter:134 family=2 entries=20 op=nft_register_rule pid=6058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:16.296000 audit[6058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffe12b1650 a2=0 a3=1 items=0 ppid=3025 pid=6058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:16.296000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:16.297000 audit[6058]: NETFILTER_CFG table=nat:135 family=2 entries=22 op=nft_register_rule pid=6058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:16.297000 audit[6058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffe12b1650 a2=0 a3=1 items=0 ppid=3025 pid=6058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:16.297000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:16.322000 audit[6060]: NETFILTER_CFG table=filter:136 family=2 entries=32 op=nft_register_rule pid=6060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 
14:57:16.322000 audit[6060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffeda9dbc0 a2=0 a3=1 items=0 ppid=3025 pid=6060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:16.322000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:16.323000 audit[6060]: NETFILTER_CFG table=nat:137 family=2 entries=22 op=nft_register_rule pid=6060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:16.323000 audit[6060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffeda9dbc0 a2=0 a3=1 items=0 ppid=3025 pid=6060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:16.323000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:16.375597 sshd[6041]: pam_unix(sshd:session): session closed for user core Jun 25 14:57:16.376000 audit[6041]: USER_END pid=6041 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:16.376000 audit[6041]: CRED_DISP pid=6041 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:16.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.34:22-10.200.16.10:56262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:16.378432 systemd-logind[1476]: Session 24 logged out. Waiting for processes to exit. Jun 25 14:57:16.378613 systemd[1]: sshd@21-10.200.20.34:22-10.200.16.10:56262.service: Deactivated successfully. Jun 25 14:57:16.379429 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 14:57:16.380702 systemd-logind[1476]: Removed session 24. Jun 25 14:57:16.463395 systemd[1]: Started sshd@22-10.200.20.34:22-10.200.16.10:56276.service - OpenSSH per-connection server daemon (10.200.16.10:56276). Jun 25 14:57:16.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.34:22-10.200.16.10:56276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:57:16.909000 audit[6063]: USER_ACCT pid=6063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:16.910406 sshd[6063]: Accepted publickey for core from 10.200.16.10 port 56276 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:57:16.910000 audit[6063]: CRED_ACQ pid=6063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:16.910000 audit[6063]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7cfb070 a2=3 a3=1 items=0 ppid=1 pid=6063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:16.910000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:16.912121 sshd[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:57:16.916719 systemd-logind[1476]: New session 25 of user core. Jun 25 14:57:16.921977 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 14:57:16.925000 audit[6063]: USER_START pid=6063 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:16.927000 audit[6065]: CRED_ACQ pid=6065 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:17.479257 sshd[6063]: pam_unix(sshd:session): session closed for user core Jun 25 14:57:17.479000 audit[6063]: USER_END pid=6063 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:17.479000 audit[6063]: CRED_DISP pid=6063 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:17.482745 systemd-logind[1476]: Session 25 logged out. Waiting for processes to exit. Jun 25 14:57:17.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.34:22-10.200.16.10:56276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:17.483515 systemd[1]: sshd@22-10.200.20.34:22-10.200.16.10:56276.service: Deactivated successfully. Jun 25 14:57:17.484337 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 14:57:17.485679 systemd-logind[1476]: Removed session 25. 
Jun 25 14:57:17.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.34:22-10.200.16.10:56290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:17.560961 systemd[1]: Started sshd@23-10.200.20.34:22-10.200.16.10:56290.service - OpenSSH per-connection server daemon (10.200.16.10:56290). Jun 25 14:57:18.002000 audit[6073]: USER_ACCT pid=6073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:18.004005 sshd[6073]: Accepted publickey for core from 10.200.16.10 port 56290 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:57:18.007052 kernel: kauditd_printk_skb: 47 callbacks suppressed Jun 25 14:57:18.007139 kernel: audit: type=1101 audit(1719327438.002:843): pid=6073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:18.009048 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:57:18.006000 audit[6073]: CRED_ACQ pid=6073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:18.032792 systemd-logind[1476]: New session 26 of user core. Jun 25 14:57:18.047486 kernel: audit: type=1103 audit(1719327438.006:844): pid=6073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:18.059748 kernel: audit: type=1006 audit(1719327438.006:845): pid=6073 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jun 25 14:57:18.006000 audit[6073]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff5e82d40 a2=3 a3=1 items=0 ppid=1 pid=6073 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:18.062213 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 25 14:57:18.082819 kernel: audit: type=1300 audit(1719327438.006:845): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff5e82d40 a2=3 a3=1 items=0 ppid=1 pid=6073 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:18.006000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:18.090406 kernel: audit: type=1327 audit(1719327438.006:845): proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:18.065000 audit[6073]: USER_START pid=6073 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:18.112798 kernel: audit: type=1105 audit(1719327438.065:846): pid=6073 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:18.080000 audit[6076]: CRED_ACQ pid=6076 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:18.132435 kernel: audit: type=1103 audit(1719327438.080:847): pid=6076 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:18.416945 sshd[6073]: pam_unix(sshd:session): session closed for user core Jun 25 14:57:18.417000 audit[6073]: USER_END pid=6073 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:18.419951 systemd-logind[1476]: Session 26 logged out. Waiting for processes to exit. Jun 25 14:57:18.421050 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 14:57:18.422351 systemd-logind[1476]: Removed session 26. Jun 25 14:57:18.422903 systemd[1]: sshd@23-10.200.20.34:22-10.200.16.10:56290.service: Deactivated successfully. 
Jun 25 14:57:18.441827 kernel: audit: type=1106 audit(1719327438.417:848): pid=6073 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:18.441988 kernel: audit: type=1104 audit(1719327438.417:849): pid=6073 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:18.417000 audit[6073]: CRED_DISP pid=6073 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:18.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.34:22-10.200.16.10:56290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:18.477732 kernel: audit: type=1131 audit(1719327438.422:850): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.34:22-10.200.16.10:56290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:19.746769 systemd[1]: run-containerd-runc-k8s.io-c18e0067a66301b341060c63335cd979b8fea934046d43b19d5a30fa4f3247f5-runc.3Edc7M.mount: Deactivated successfully. Jun 25 14:57:23.499285 systemd[1]: Started sshd@24-10.200.20.34:22-10.200.16.10:56294.service - OpenSSH per-connection server daemon (10.200.16.10:56294). Jun 25 14:57:23.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.34:22-10.200.16.10:56294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:23.520843 kernel: audit: type=1130 audit(1719327443.498:851): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.34:22-10.200.16.10:56294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:57:23.948000 audit[6108]: USER_ACCT pid=6108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:23.949121 sshd[6108]: Accepted publickey for core from 10.200.16.10 port 56294 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:57:23.970946 kernel: audit: type=1101 audit(1719327443.948:852): pid=6108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:23.970000 audit[6108]: CRED_ACQ pid=6108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:23.971894 sshd[6108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:57:23.978297 systemd-logind[1476]: New session 27 of user core. Jun 25 14:57:24.006799 kernel: audit: type=1103 audit(1719327443.970:853): pid=6108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:24.006858 kernel: audit: type=1006 audit(1719327443.970:854): pid=6108 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jun 25 14:57:24.006888 kernel: audit: type=1300 audit(1719327443.970:854): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdcdd1870 a2=3 a3=1 items=0 ppid=1 pid=6108 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:23.970000 audit[6108]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdcdd1870 a2=3 a3=1 items=0 ppid=1 pid=6108 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:24.006145 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jun 25 14:57:23.970000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:24.041476 kernel: audit: type=1327 audit(1719327443.970:854): proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:24.011000 audit[6108]: USER_START pid=6108 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:24.063735 kernel: audit: type=1105 audit(1719327444.011:855): pid=6108 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:24.013000 audit[6110]: CRED_ACQ pid=6110 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:24.083229 kernel: audit: type=1103 audit(1719327444.013:856): pid=6110 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:24.359440 sshd[6108]: pam_unix(sshd:session): session closed for user core Jun 25 14:57:24.359000 audit[6108]: USER_END pid=6108 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:24.362730 systemd-logind[1476]: Session 27 logged out. Waiting for processes to exit. Jun 25 14:57:24.363988 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 14:57:24.365332 systemd-logind[1476]: Removed session 27. Jun 25 14:57:24.365971 systemd[1]: sshd@24-10.200.20.34:22-10.200.16.10:56294.service: Deactivated successfully. Jun 25 14:57:24.359000 audit[6108]: CRED_DISP pid=6108 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:24.400906 kernel: audit: type=1106 audit(1719327444.359:857): pid=6108 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:24.401014 kernel: audit: type=1104 audit(1719327444.359:858): pid=6108 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:24.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.34:22-10.200.16.10:56294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:57:26.022000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:57:26.022000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=400114e5a0 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:57:26.022000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:57:26.023000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:57:26.023000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=4000a22760 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:57:26.023000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:57:26.023000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:57:26.023000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000a22940 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:57:26.023000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:57:26.025000 audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c10,c345 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:57:26.025000 audit[2694]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=400114e740 a2=fc6 a3=0 items=0 ppid=2545 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c10,c345 key=(null) Jun 25 14:57:26.025000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:57:29.446213 systemd[1]: Started sshd@25-10.200.20.34:22-10.200.16.10:57414.service - OpenSSH per-connection server daemon (10.200.16.10:57414). Jun 25 14:57:29.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.34:22-10.200.16.10:57414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:29.450800 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 14:57:29.450898 kernel: audit: type=1130 audit(1719327449.445:864): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.34:22-10.200.16.10:57414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:29.900000 audit[6128]: USER_ACCT pid=6128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:29.901731 sshd[6128]: Accepted publickey for core from 10.200.16.10 port 57414 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:57:29.903583 sshd[6128]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:57:29.926849 systemd-logind[1476]: New session 28 of user core. Jun 25 14:57:29.948506 kernel: audit: type=1101 audit(1719327449.900:865): pid=6128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:29.948539 kernel: audit: type=1103 audit(1719327449.902:866): pid=6128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:29.902000 audit[6128]: CRED_ACQ pid=6128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:29.948188 systemd[1]: Started session-28.scope - Session 28 of User core. 
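The AVC records above show SELinux (permissive=0) denying the kube-controller-manager process a { watch } on /etc/kubernetes/pki/ca.crt: the denied syscall (27 here, which on aarch64 maps to inotify_add_watch) returns exit=-13 (EACCES). A minimal parsing sketch, where the regex and field names are our assumptions modelled on these records rather than a general audit parser:

import re

# Regex modelled on the AVC lines above; it only pulls out the fields we care about.
AVC_RE = re.compile(
    r'avc:\s+denied\s+\{\s*(?P<perm>[^}]+?)\s*\}'
    r'.*?comm="(?P<comm>[^"]+)"'
    r'.*?path="(?P<path>[^"]+)"'
    r'.*?scontext=(?P<scontext>\S+)\s+tcontext=(?P<tcontext>\S+)\s+tclass=(?P<tclass>\S+)')

sample = ('audit[2694]: AVC avc: denied { watch } for pid=2694 comm="kube-controller" '
          'path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 '
          'scontext=system_u:system_r:container_t:s0:c10,c345 '
          'tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0')

m = AVC_RE.search(sample)
print(m.groupdict() if m else "no match")
# -> {'perm': 'watch', 'comm': 'kube-controller', 'path': '/etc/kubernetes/pki/ca.crt',
#     'scontext': 'system_u:system_r:container_t:s0:c10,c345',
#     'tcontext': 'system_u:object_r:etc_t:s0', 'tclass': 'file'}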
Jun 25 14:57:29.961870 kernel: audit: type=1006 audit(1719327449.902:867): pid=6128 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jun 25 14:57:29.902000 audit[6128]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4614b20 a2=3 a3=1 items=0 ppid=1 pid=6128 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:29.983040 kernel: audit: type=1300 audit(1719327449.902:867): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4614b20 a2=3 a3=1 items=0 ppid=1 pid=6128 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:29.902000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:29.992173 kernel: audit: type=1327 audit(1719327449.902:867): proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:29.953000 audit[6128]: USER_START pid=6128 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:30.015952 kernel: audit: type=1105 audit(1719327449.953:868): pid=6128 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:29.954000 audit[6130]: CRED_ACQ pid=6130 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:30.035439 kernel: audit: type=1103 audit(1719327449.954:869): pid=6130 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:30.308328 sshd[6128]: pam_unix(sshd:session): session closed for user core Jun 25 14:57:30.309000 audit[6128]: USER_END pid=6128 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:30.312529 systemd-logind[1476]: Session 28 logged out. Waiting for processes to exit. Jun 25 14:57:30.313909 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 14:57:30.315110 systemd-logind[1476]: Removed session 28. Jun 25 14:57:30.315856 systemd[1]: sshd@25-10.200.20.34:22-10.200.16.10:57414.service: Deactivated successfully. 
Jun 25 14:57:30.309000 audit[6128]: CRED_DISP pid=6128 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:30.351747 kernel: audit: type=1106 audit(1719327450.309:870): pid=6128 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:30.351899 kernel: audit: type=1104 audit(1719327450.309:871): pid=6128 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:30.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.34:22-10.200.16.10:57414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:33.486172 systemd[1]: run-containerd-runc-k8s.io-b0b8c45a1f1f6b9537bf08d95bea92f489c63eb0614a88497520d6fc93eb04e9-runc.XCBQ7H.mount: Deactivated successfully. Jun 25 14:57:35.388986 systemd[1]: Started sshd@26-10.200.20.34:22-10.200.16.10:51836.service - OpenSSH per-connection server daemon (10.200.16.10:51836). Jun 25 14:57:35.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.34:22-10.200.16.10:51836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:35.395423 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:57:35.395490 kernel: audit: type=1130 audit(1719327455.388:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.34:22-10.200.16.10:51836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:57:35.670000 audit[6168]: NETFILTER_CFG table=filter:138 family=2 entries=20 op=nft_register_rule pid=6168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:35.670000 audit[6168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=fffff4ef4aa0 a2=0 a3=1 items=0 ppid=3025 pid=6168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:35.707349 kernel: audit: type=1325 audit(1719327455.670:874): table=filter:138 family=2 entries=20 op=nft_register_rule pid=6168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:35.707480 kernel: audit: type=1300 audit(1719327455.670:874): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=fffff4ef4aa0 a2=0 a3=1 items=0 ppid=3025 pid=6168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:35.670000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:35.720448 kernel: audit: type=1327 audit(1719327455.670:874): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:35.674000 audit[6168]: NETFILTER_CFG table=nat:139 family=2 entries=106 op=nft_register_chain pid=6168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:35.733253 kernel: audit: type=1325 audit(1719327455.674:875): table=nat:139 family=2 entries=106 op=nft_register_chain pid=6168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:35.674000 audit[6168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=fffff4ef4aa0 a2=0 a3=1 items=0 ppid=3025 pid=6168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:35.757361 kernel: audit: type=1300 audit(1719327455.674:875): arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=fffff4ef4aa0 a2=0 a3=1 items=0 ppid=3025 pid=6168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:35.674000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:35.769917 kernel: audit: type=1327 audit(1719327455.674:875): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:35.856000 audit[6165]: USER_ACCT pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:35.858581 sshd[6165]: Accepted publickey for core from 10.200.16.10 port 51836 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:57:35.859415 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:57:35.858000 audit[6165]: CRED_ACQ pid=6165 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:35.882081 systemd-logind[1476]: New session 29 of user core. Jun 25 14:57:35.910282 kernel: audit: type=1101 audit(1719327455.856:876): pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:35.910313 kernel: audit: type=1103 audit(1719327455.858:877): pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:35.910331 kernel: audit: type=1006 audit(1719327455.858:878): pid=6165 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jun 25 14:57:35.858000 audit[6165]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff6542110 a2=3 a3=1 items=0 ppid=1 pid=6165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:35.858000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:35.910151 systemd[1]: Started session-29.scope - Session 29 of User core. Jun 25 14:57:35.914000 audit[6165]: USER_START pid=6165 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:35.916000 audit[6170]: CRED_ACQ pid=6170 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:36.230589 sshd[6165]: pam_unix(sshd:session): session closed for user core Jun 25 14:57:36.230000 audit[6165]: USER_END pid=6165 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:36.231000 audit[6165]: CRED_DISP pid=6165 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:36.233420 systemd-logind[1476]: Session 29 logged out. Waiting for processes to exit. Jun 25 14:57:36.233616 systemd[1]: session-29.scope: Deactivated successfully. Jun 25 14:57:36.234239 systemd[1]: sshd@26-10.200.20.34:22-10.200.16.10:51836.service: Deactivated successfully. Jun 25 14:57:36.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.34:22-10.200.16.10:51836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:36.235360 systemd-logind[1476]: Removed session 29. 
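The NETFILTER_CFG records above come from repeated iptables-restore runs (same parent pid 3025) rewriting the filter and nat tables. A small tally sketch (the regex is an assumption based on the field order in these records) that sums the reported entry counts per table and operation:

import re
from collections import defaultdict

NFT_RE = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=\d+ entries=(\d+) op=(\w+)")

def tally_netfilter(log_text: str) -> dict:
    totals = defaultdict(int)
    for table, entries, op in NFT_RE.findall(log_text):
        totals[(table, op)] += int(entries)
    return dict(totals)

# On the records above this yields, for example:
#   ('filter', 'nft_register_rule') -> 20
#   ('nat', 'nft_register_chain')   -> 106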
Jun 25 14:57:41.318097 systemd[1]: Started sshd@27-10.200.20.34:22-10.200.16.10:51840.service - OpenSSH per-connection server daemon (10.200.16.10:51840). Jun 25 14:57:41.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.20.34:22-10.200.16.10:51840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:41.322514 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 14:57:41.322614 kernel: audit: type=1130 audit(1719327461.317:884): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.20.34:22-10.200.16.10:51840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:41.767000 audit[6191]: USER_ACCT pid=6191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:41.768632 sshd[6191]: Accepted publickey for core from 10.200.16.10 port 51840 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:57:41.789858 kernel: audit: type=1101 audit(1719327461.767:885): pid=6191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:41.789000 audit[6191]: CRED_ACQ pid=6191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:41.791239 sshd[6191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:57:41.822971 kernel: audit: type=1103 audit(1719327461.789:886): pid=6191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:41.823127 kernel: audit: type=1006 audit(1719327461.790:887): pid=6191 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jun 25 14:57:41.790000 audit[6191]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee9d4570 a2=3 a3=1 items=0 ppid=1 pid=6191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:41.844194 kernel: audit: type=1300 audit(1719327461.790:887): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee9d4570 a2=3 a3=1 items=0 ppid=1 pid=6191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:41.790000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:41.848129 systemd-logind[1476]: New session 30 of user core. Jun 25 14:57:41.859893 kernel: audit: type=1327 audit(1719327461.790:887): proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:41.859076 systemd[1]: Started session-30.scope - Session 30 of User core. 
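The kernel's audit(EPOCH.MSEC:SERIAL) stamps and the journal's wall-clock prefixes describe the same instant; for example audit(1719327461.790:887) above lines up with the "Jun 25 14:57:41.790000" SYSCALL record for pid 6191. A small conversion sketch (assuming the host clock is UTC, which the matching prefixes suggest; the function name is ours):

from datetime import datetime, timezone

def audit_stamp_to_utc(stamp: str) -> datetime:
    """Convert an audit 'EPOCH.MSEC:SERIAL' stamp, e.g. '1719327461.790:887'."""
    epoch, _, _serial = stamp.partition(":")
    return datetime.fromtimestamp(float(epoch), tz=timezone.utc)

print(audit_stamp_to_utc("1719327461.790:887"))
# -> 2024-06-25 14:57:41.790000+00:00, matching the SYSCALL record above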
Jun 25 14:57:41.863000 audit[6191]: USER_START pid=6191 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:41.865000 audit[6193]: CRED_ACQ pid=6193 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:41.908707 kernel: audit: type=1105 audit(1719327461.863:888): pid=6191 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:41.908983 kernel: audit: type=1103 audit(1719327461.865:889): pid=6193 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:42.180235 sshd[6191]: pam_unix(sshd:session): session closed for user core Jun 25 14:57:42.180000 audit[6191]: USER_END pid=6191 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:42.183142 systemd[1]: session-30.scope: Deactivated successfully. Jun 25 14:57:42.184082 systemd[1]: sshd@27-10.200.20.34:22-10.200.16.10:51840.service: Deactivated successfully. Jun 25 14:57:42.206929 systemd-logind[1476]: Session 30 logged out. Waiting for processes to exit. Jun 25 14:57:42.180000 audit[6191]: CRED_DISP pid=6191 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:42.208161 systemd-logind[1476]: Removed session 30. Jun 25 14:57:42.225087 kernel: audit: type=1106 audit(1719327462.180:890): pid=6191 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:42.225184 kernel: audit: type=1104 audit(1719327462.180:891): pid=6191 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:42.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.20.34:22-10.200.16.10:51840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:47.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.20.34:22-10.200.16.10:37742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:57:47.266577 systemd[1]: Started sshd@28-10.200.20.34:22-10.200.16.10:37742.service - OpenSSH per-connection server daemon (10.200.16.10:37742). Jun 25 14:57:47.270764 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:57:47.270896 kernel: audit: type=1130 audit(1719327467.265:893): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.20.34:22-10.200.16.10:37742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:47.715000 audit[6208]: USER_ACCT pid=6208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:47.717885 sshd[6208]: Accepted publickey for core from 10.200.16.10 port 37742 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:57:47.719770 sshd[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:57:47.725728 systemd-logind[1476]: New session 31 of user core. Jun 25 14:57:47.756682 kernel: audit: type=1101 audit(1719327467.715:894): pid=6208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:47.756720 kernel: audit: type=1103 audit(1719327467.715:895): pid=6208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:47.715000 audit[6208]: CRED_ACQ pid=6208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:47.756118 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jun 25 14:57:47.767712 kernel: audit: type=1006 audit(1719327467.715:896): pid=6208 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jun 25 14:57:47.715000 audit[6208]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc7ff7b0 a2=3 a3=1 items=0 ppid=1 pid=6208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:47.789240 kernel: audit: type=1300 audit(1719327467.715:896): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc7ff7b0 a2=3 a3=1 items=0 ppid=1 pid=6208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:47.715000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:47.797319 kernel: audit: type=1327 audit(1719327467.715:896): proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:47.761000 audit[6208]: USER_START pid=6208 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:47.818633 kernel: audit: type=1105 audit(1719327467.761:897): pid=6208 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:47.761000 audit[6210]: CRED_ACQ pid=6210 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:47.837137 kernel: audit: type=1103 audit(1719327467.761:898): pid=6210 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:48.103039 sshd[6208]: pam_unix(sshd:session): session closed for user core Jun 25 14:57:48.103000 audit[6208]: USER_END pid=6208 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:48.106492 systemd-logind[1476]: Session 31 logged out. Waiting for processes to exit. Jun 25 14:57:48.107744 systemd[1]: session-31.scope: Deactivated successfully. Jun 25 14:57:48.109059 systemd-logind[1476]: Removed session 31. Jun 25 14:57:48.109773 systemd[1]: sshd@28-10.200.20.34:22-10.200.16.10:37742.service: Deactivated successfully. 
Jun 25 14:57:48.103000 audit[6208]: CRED_DISP pid=6208 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:48.143684 kernel: audit: type=1106 audit(1719327468.103:899): pid=6208 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:48.143815 kernel: audit: type=1104 audit(1719327468.103:900): pid=6208 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:48.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.20.34:22-10.200.16.10:37742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:49.746154 systemd[1]: run-containerd-runc-k8s.io-c18e0067a66301b341060c63335cd979b8fea934046d43b19d5a30fa4f3247f5-runc.wOEJPQ.mount: Deactivated successfully. Jun 25 14:57:53.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.200.20.34:22-10.200.16.10:37752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:53.202115 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:57:53.202189 kernel: audit: type=1130 audit(1719327473.176:902): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.200.20.34:22-10.200.16.10:37752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:53.176835 systemd[1]: Started sshd@29-10.200.20.34:22-10.200.16.10:37752.service - OpenSSH per-connection server daemon (10.200.16.10:37752). Jun 25 14:57:53.590000 audit[6258]: USER_ACCT pid=6258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:53.591115 sshd[6258]: Accepted publickey for core from 10.200.16.10 port 37752 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:57:53.610000 audit[6258]: CRED_ACQ pid=6258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:53.611989 sshd[6258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:57:53.618453 systemd-logind[1476]: New session 32 of user core. 
Jun 25 14:57:53.671357 kernel: audit: type=1101 audit(1719327473.590:903): pid=6258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:53.671393 kernel: audit: type=1103 audit(1719327473.610:904): pid=6258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:53.671413 kernel: audit: type=1006 audit(1719327473.610:905): pid=6258 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jun 25 14:57:53.671432 kernel: audit: type=1300 audit(1719327473.610:905): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe2054d10 a2=3 a3=1 items=0 ppid=1 pid=6258 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:53.671450 kernel: audit: type=1327 audit(1719327473.610:905): proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:53.610000 audit[6258]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe2054d10 a2=3 a3=1 items=0 ppid=1 pid=6258 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:53.610000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:53.671064 systemd[1]: Started session-32.scope - Session 32 of User core. Jun 25 14:57:53.675000 audit[6258]: USER_START pid=6258 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:53.677000 audit[6260]: CRED_ACQ pid=6260 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:53.716541 kernel: audit: type=1105 audit(1719327473.675:906): pid=6258 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:53.716667 kernel: audit: type=1103 audit(1719327473.677:907): pid=6260 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:53.985747 sshd[6258]: pam_unix(sshd:session): session closed for user core Jun 25 14:57:53.986000 audit[6258]: USER_END pid=6258 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:53.990470 systemd[1]: session-32.scope: Deactivated successfully. 
Jun 25 14:57:53.991568 systemd[1]: sshd@29-10.200.20.34:22-10.200.16.10:37752.service: Deactivated successfully. Jun 25 14:57:53.987000 audit[6258]: CRED_DISP pid=6258 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:54.010457 systemd-logind[1476]: Session 32 logged out. Waiting for processes to exit. Jun 25 14:57:54.011673 systemd-logind[1476]: Removed session 32. Jun 25 14:57:54.027456 kernel: audit: type=1106 audit(1719327473.986:908): pid=6258 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:54.027611 kernel: audit: type=1104 audit(1719327473.987:909): pid=6258 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:53.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.200.20.34:22-10.200.16.10:37752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:59.068858 systemd[1]: Started sshd@30-10.200.20.34:22-10.200.16.10:42146.service - OpenSSH per-connection server daemon (10.200.16.10:42146). Jun 25 14:57:59.092688 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:57:59.092830 kernel: audit: type=1130 audit(1719327479.068:911): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.200.20.34:22-10.200.16.10:42146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:59.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.200.20.34:22-10.200.16.10:42146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:57:59.509000 audit[6279]: USER_ACCT pid=6279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:59.511028 sshd[6279]: Accepted publickey for core from 10.200.16.10 port 42146 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:57:59.518406 sshd[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:57:59.517000 audit[6279]: CRED_ACQ pid=6279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:59.550477 kernel: audit: type=1101 audit(1719327479.509:912): pid=6279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:59.550602 kernel: audit: type=1103 audit(1719327479.517:913): pid=6279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:59.556880 systemd-logind[1476]: New session 33 of user core. Jun 25 14:57:59.593548 kernel: audit: type=1006 audit(1719327479.517:914): pid=6279 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jun 25 14:57:59.593584 kernel: audit: type=1300 audit(1719327479.517:914): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9234760 a2=3 a3=1 items=0 ppid=1 pid=6279 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:59.593612 kernel: audit: type=1327 audit(1719327479.517:914): proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:59.517000 audit[6279]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9234760 a2=3 a3=1 items=0 ppid=1 pid=6279 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:59.517000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:57:59.594230 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jun 25 14:57:59.598000 audit[6279]: USER_START pid=6279 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:59.622000 audit[6281]: CRED_ACQ pid=6281 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:59.642516 kernel: audit: type=1105 audit(1719327479.598:915): pid=6279 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:59.642626 kernel: audit: type=1103 audit(1719327479.622:916): pid=6281 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:59.951094 sshd[6279]: pam_unix(sshd:session): session closed for user core Jun 25 14:57:59.951000 audit[6279]: USER_END pid=6279 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:59.953000 audit[6279]: CRED_DISP pid=6279 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:59.976182 systemd[1]: sshd@30-10.200.20.34:22-10.200.16.10:42146.service: Deactivated successfully. Jun 25 14:57:59.977024 systemd[1]: session-33.scope: Deactivated successfully. Jun 25 14:57:59.994086 kernel: audit: type=1106 audit(1719327479.951:917): pid=6279 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:59.994215 kernel: audit: type=1104 audit(1719327479.953:918): pid=6279 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:57:59.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.200.20.34:22-10.200.16.10:42146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:57:59.995668 systemd-logind[1476]: Session 33 logged out. Waiting for processes to exit. Jun 25 14:57:59.996542 systemd-logind[1476]: Removed session 33. Jun 25 14:58:03.472400 systemd[1]: run-containerd-runc-k8s.io-b0b8c45a1f1f6b9537bf08d95bea92f489c63eb0614a88497520d6fc93eb04e9-runc.xWFeCg.mount: Deactivated successfully. 
Jun 25 14:58:05.025738 systemd[1]: Started sshd@31-10.200.20.34:22-10.200.16.10:43184.service - OpenSSH per-connection server daemon (10.200.16.10:43184).
Jun 25 14:58:05.048973 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jun 25 14:58:05.049098 kernel: audit: type=1130 audit(1719327485.025:920): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.200.20.34:22-10.200.16.10:43184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:58:05.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.200.20.34:22-10.200.16.10:43184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:58:05.432000 audit[6312]: USER_ACCT pid=6312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jun 25 14:58:05.435351 sshd[6312]: Accepted publickey for core from 10.200.16.10 port 43184 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo
Jun 25 14:58:05.455000 audit[6312]: CRED_ACQ pid=6312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jun 25 14:58:05.457333 sshd[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 14:58:05.475077 kernel: audit: type=1101 audit(1719327485.432:921): pid=6312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jun 25 14:58:05.475195 kernel: audit: type=1103 audit(1719327485.455:922): pid=6312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jun 25 14:58:05.488712 kernel: audit: type=1006 audit(1719327485.456:923): pid=6312 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1
Jun 25 14:58:05.456000 audit[6312]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc2d77300 a2=3 a3=1 items=0 ppid=1 pid=6312 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:58:05.510431 kernel: audit: type=1300 audit(1719327485.456:923): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc2d77300 a2=3 a3=1 items=0 ppid=1 pid=6312 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:58:05.456000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jun 25 14:58:05.518965 kernel: audit: type=1327 audit(1719327485.456:923): proctitle=737368643A20636F7265205B707269765D
Jun 25 14:58:05.522345 systemd-logind[1476]: New session 34 of user core.
Jun 25 14:58:05.528962 systemd[1]: Started session-34.scope - Session 34 of User core.
Jun 25 14:58:05.533000 audit[6312]: USER_START pid=6312 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jun 25 14:58:05.556000 audit[6314]: CRED_ACQ pid=6314 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jun 25 14:58:05.577808 kernel: audit: type=1105 audit(1719327485.533:924): pid=6312 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jun 25 14:58:05.577933 kernel: audit: type=1103 audit(1719327485.556:925): pid=6314 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jun 25 14:58:05.855425 sshd[6312]: pam_unix(sshd:session): session closed for user core
Jun 25 14:58:05.855000 audit[6312]: USER_END pid=6312 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jun 25 14:58:05.858559 systemd-logind[1476]: Session 34 logged out. Waiting for processes to exit.
Jun 25 14:58:05.859681 systemd[1]: session-34.scope: Deactivated successfully.
Jun 25 14:58:05.860800 systemd-logind[1476]: Removed session 34.
Jun 25 14:58:05.861304 systemd[1]: sshd@31-10.200.20.34:22-10.200.16.10:43184.service: Deactivated successfully.
Jun 25 14:58:05.855000 audit[6312]: CRED_DISP pid=6312 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jun 25 14:58:05.896503 kernel: audit: type=1106 audit(1719327485.855:926): pid=6312 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jun 25 14:58:05.896621 kernel: audit: type=1104 audit(1719327485.855:927): pid=6312 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jun 25 14:58:05.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.200.20.34:22-10.200.16.10:43184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'